
System Safety Engineering Process

The System Safety Engineering Process – what it is and how to do it.

This is the full-length (50-minute) session on the System Safety Process, which is called up in the general requirements of Mil-Std-882E. I cover the Applicability of Mil-Std-882E tasks, the General Requirements, the Process with eight elements, and the Application of process theory to the real world. 

You Will Learn to:

  • Know the system safety process in accordance with Mil-Std-882E;
  • List and order the eight elements;
  • Understand how they are applied;
  • Skilfully apply system safety using realistic processes; and
  • Feel more confident dealing with this and other standards.
System Safety Process – this is the free demo.

Topics: System Safety Engineering Process

  • Applicability of Mil-Std-882E tasks;
  • General requirements;
  • Process with eight elements; and
  • Application of process theory to the real world

Transcript: System Safety Process

CLICK HERE for the Transcript

System Safety Process

Hi, everyone, and welcome to the Safety Artisan. I’m Simon, your host. Today I’m going to be using my experience with System Safety Engineering to talk you through the process that we need to follow to achieve success. Because to use a corny saying, ‘Safety doesn’t happen by accident’. Safety is what we call an emergent property. And to get it, we need to decide what we mean by safety, decide what our goals are, and then work out how we’re going to get there. It’s a planned systematic activity. Especially if we’re going to deal with very complex projects or situations. Times where there is a requirement to make that understanding and that planning explicit. Where the requirement becomes the difference between success and failure. Anyway, that’s enough of that. Let’s get on and look at the session.

Military Standard 882E, Section 4 General Requirements

Today we’re talking about System Safety Process. To help us do that, we’re going to be looking at a particular standard – the general requirements of that standard. And those are from Section Four of Military Standard 882E. But don’t get hung up on which standard it is. That’s not the point here. It’s a means to an end. I’ll talk about other standards and how we perform system safety engineering in different domains.

Learning Objectives

Our learning objectives for today are here. In this session, you will learn, or you’ll know, the system safety process in accordance with that Mil. Standard. You will be able to list and order the eight elements of the process. You will understand how to apply the eight elements. And you will be able to apply system safety with some skill using realistic processes. We’re going to spend quite a bit of time talking about how it’s actually done vs. how it appears on a sheet of paper. Also known as how it appears written in a standard. So, we’re going to talk about doing it in the real world. At the end of all that, you will be able to feel more confident dealing with multiple different standards.

The focus is not on this military standard, but on understanding the process. The fundamentals of what we’re trying to achieve and why. Then you will be able to extrapolate those principles to other standards. And that should help you to understand whatever it is you’re dealing with. It doesn’t have to be Mil. Standard 882E.

Contents of this Session

We’ve got four sets of contents in the session. First of all, I’m going to talk about the applicability of Military Standard 882E. From the standard itself and the tasks (you’ll see why that’s important) to understanding what you’re supposed to do. Then other standards later on. I’m going to talk about those general requirements that the standard places on us to do the work. A big part of that is looking at a process following the eight elements. And finally, we will apply that theory of how the process should work to the real world. And that will include learning some real-world lessons. You should find these useful for all standards and all circumstances.

So, it just remains for me to say thank you very much for listening. You can find a free pdf of the System Safety Engineering Standard, Mil-Std-882E, here.


How to Understand Safety Standards

Learn How to Understand Safety Standards with this FREE session from The Safety Artisan.

In this module, Understanding Your Standard, we’re going to ask the question: Am I Doing the Right Thing, and am I Doing it Right? Standards are commonly used for many reasons. We need to understand our chosen system safety engineering standard in order to know: the concepts upon which it is based; what it was designed to do, why, and for whom; which kinds of risk it addresses; what kinds of evidence it produces; and its advantages and disadvantages.

Understand Safety Standards: You’ll Learn to

  • List the hazard analysis tasks that make up a program; and
  • Describe the key attributes of Mil-Std-882E. 
Understanding Your Standard

Topics: Understand Safety Standards

Aim: Am I Doing the Right Thing, and am I Doing it Right?

  • Standards: What and Why?
  • System Safety Engineering pedigree;
  • Advantages – systematic, comprehensive, etc.; and
  • Disadvantages – cost/schedule, complexity & quantity not quality.

Transcript: Understand Safety Standards

Click here for the Transcript on Understanding Safety Standards

In Module Three, we’re going to understand our Standard. The standard is the thing that we’re going to use to achieve things – the tool. And that’s important because tools designed to do certain things usually perform well. But they don’t always perform well on other things. So we’re going to ask ‘Are we doing the right thing?’ And ‘Are we doing it right?’

What and Why?

So, what are we going to do, and why are we doing it? First of all, the use of standards in safety is very common for lots of reasons. It helps us to have confidence that what we’re doing is good enough. We’ve met a standard of performance in the absolute sense. It helps us to say, ‘We’ve achieved standardization or commonality in what we’re doing’. And we can also use it to help us achieve a compromise. That can be a compromise across different stakeholders or across different organizations. And standardization gives us some of the other benefits as well. If we’re all doing the same thing rather than we’re all doing different things, it makes it easier to train staff. This is one example of how a standard helps.

However, we need to understand this tool that we’re going to use. What it does, what it’s designed to do, and what it is not designed to do. That’s important for any standard or any tool. In safety, it’s particularly important because safety is in many respects intangible. This is because we’re always looking to prevent a future problem from occurring. In the present, it’s a little bit abstract. It’s a bit intangible. So, we need to make sure that in concept what we’re doing makes sense and is coherent. That it works together. If we look at those five bullet points there, we need to understand the concept of each standard. We need to understand the basis of each one.

And they’re not all based on the same concept. Thus some of them are contradictory or incompatible. We need to understand the design of the standard. What the standard does, what its aim is, why it came into existence. And who brought it into existence, to do what, for whom – who’s the ultimate customer here?

And for risk analysis standards, we need to understand what kind of risks it addresses. Because the way you treat a financial risk might be very different from a safety risk. In the world of finance, you might have a portfolio of products, like loans. These products might have some risks associated with them. One or two loans might go bad and you might lose money on those. But as long as the whole portfolio is making money, that might be acceptable to you. You might say, ‘I’m not worried that 10% of my loans have gone south and all gone wrong. I’m still making plenty of profit out of the other 90%’. It doesn’t work that way with safety. You can’t say, ‘It’s OK that I’ve killed a few people over here because all this lot over here are still alive!’. It doesn’t work like that!

Also, what kind of evidence does the standard produce? Because in safety, we are very often working in a legal framework that requires us to do certain things. It requires us to achieve a certain level of safety and prove that we have done so. So, we need certain kinds of evidence. In different jurisdictions and different industries, some evidence is acceptable and some is not. You need to know which is which for your area.

And then finally, let’s think about the pros and cons of the standard, what does it do well? And what does it do not so well?

System Safety Pedigree

We’re going to look at a standard called Military Standard 882E. Many decades ago, this standard was created by the US government and military to help them bring into service complex, cutting-edge military equipment. Equipment that was always on the cutting edge. That pushed the limits of what you could achieve in performance.

That’s a lot of complexity. Lots of critical weapon systems, and so forth. And they needed something that could cope with all that complexity. It’s a system safety engineering standard. It’s used by engineers, but also by many other specialists. As I said, it’s got a background in military systems. These days you find these principles used pretty much everywhere. So, all the approaches to System Safety that 882 introduced appear in other standards, and in other countries too.

It addresses risks to people, equipment, and the environment, as we heard earlier. And because it’s an American standard, it’s about system safety. It’s very much about identifying requirements. What do we need to happen to be safe? To do that, it produces lots of requirements. It performs analyses against all those requirements and generates further requirements. And it produces requirements for test evidence. We then need to fulfill these requirements. It’s got several important advantages and disadvantages. We’re going to discuss these in the next few slides.

Comprehensive Analysis

Before we get to that, we need to look at the key feature of this standard. The strengths and weaknesses of this standard come from its comprehensive analysis. And the chart (see the slide) is meant to show how we are looking at the system from lots of different perspectives. (It’s not meant to be some arcane religious symbol!) So, we’re looking at a system from 10 different perspectives, in 10 different ways.

Going around clockwise, we’ve got these ten different hazard analysis tasks. First of all, we start off with preliminary hazard identification. Then preliminary hazard analysis. We do some system requirements hazard analysis. So, we identify the safety requirements that the system is going to meet so that we are safe. We look at subsystem and system hazard analysis. At operating and support hazard analysis – people working with the system. Number seven, we look at health hazard analysis – can the system cause health problems for people? Functional hazard analysis, which is all about what the system does. Here we’re thinking of software and data-driven functionality. Maybe there’s no physical system, but it does stuff. It delivers benefits or risks. System-of-systems hazard analysis – we could have lots of different and/or complex systems interacting. And then finally, the tenth one – environmental hazard analysis.

If we use all these perspectives to examine the system, we get a comprehensive analysis of the system. From this analysis, we should be confident that we have identified everything we need to. All the hazards and all the safety requirements that we need to identify. Then we can confidently deliver an appropriate safe system. We can do this even if the system is extremely complex. The standard is designed to deal with big, complex cutting-edge systems.
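For reference, the ten perspectives above can be captured as simple data, for example when planning which tasks a tailored program will include. This is an illustrative Python sketch; the numbering follows the 200-series task numbers of Mil-Std-882E (the transcript’s “preliminary hazard identification” corresponds to Task 201, the Preliminary Hazard List).

```python
# The ten hazard analysis tasks (Mil-Std-882E 200-series task numbers).
HAZARD_ANALYSIS_TASKS = {
    201: "Preliminary Hazard List",
    202: "Preliminary Hazard Analysis",
    203: "System Requirements Hazard Analysis",
    204: "Subsystem Hazard Analysis",
    205: "System Hazard Analysis",
    206: "Operating and Support Hazard Analysis",
    207: "Health Hazard Analysis",
    208: "Functional Hazard Analysis",
    209: "System-of-Systems Hazard Analysis",
    210: "Environmental Hazard Analysis",
}

def tailor_program(selected_task_numbers):
    """Return the subset of tasks chosen for a tailored analysis program."""
    unknown = set(selected_task_numbers) - HAZARD_ANALYSIS_TASKS.keys()
    if unknown:
        raise ValueError(f"Unknown task numbers: {sorted(unknown)}")
    return {n: HAZARD_ANALYSIS_TASKS[n] for n in sorted(selected_task_numbers)}

# Example: a small program using only the preliminary and O&S analyses.
small_program = tailor_program([201, 202, 206])
```

Tailoring (choosing a subset of tasks) is exactly what later modules discuss: few programs need all ten.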

Advantages #1

In fact, as we move on to advantages, that’s the number one advantage of this standard. If we use it and we use all 10 of those tasks, we can cope with the largest and most demanding programs. I spent much of my career working on the Eurofighter Typhoon. It was a multi-billion-dollar program on which four different nations worked together. We used a derivative of Mil. Standard 882 to look at safety and analyze it. And it coped. It was powerful enough to deal with that gigantic program. I spent 13 years of my life, on and off, on that program, so I’d like to think that I know my stuff when we’re talking about this.

As we’ve already said, it’s a systematic approach to safety. Systems, safety, engineering. And we can start very early. We can start with early requirements discovery. We don’t even need a design – we know that we have a need. So we can think about those needs and analyze them.

And it can cover us right through until final disposal. And it covers all kinds of elements that you might find in a system. Remember our definition of ‘system’? It’s something that consists of hardware, software, data, human beings, and so on. The standard can cope with all the elements of a system. In fact, it’s designed into the standard. It was specifically designed to look at all those different elements. Then to get different insights from those elements. It’s designed to get that comprehensive coverage. It’s really good at what it does. And it involves not just engineers, but people from all kinds of other disciplines. Including operators, maintainers, and so on.

I came from a maintenance background. I was either directly or indirectly supporting operators. I was responsible for trying to help them get the best out of their system. Again, that’s a very familiar world to me. And rigorous standards like this can help us to think rigorously about what we’re doing. And so get results even in the presence of great complexity, which is not always a given, I must say.

So, we can be confident by applying the standard. We know that we’re going to get a comprehensive and thorough analysis. This assures us that what we’re doing is good.

Advantages #2

So, there’s another set of advantages. I’ve already mentioned that we get assurance. Assurance is ‘justified confidence’. So we can have high confidence that all reasonably foreseeable hazards will be identified and analyzed. And if you’re in a legal jurisdiction where you are required to hit a target, this is going to help you hit that target.

The standard was also designed for use in contracts. It’s designed to be applied to big programs. We’d define that as where we are doing the development of complex high-performance systems. So, there are a lot of risks. It’s designed to cope with those risks.

Finally, the standard also includes requirements for contracting, for interfaces with other systems, for interfaces with systems engineering. This is very important for a variety of disciplines. It’s important for other engineering and technical disciplines. It’s important for non-technical disciplines and for analysis and recordkeeping. Again, all these things are important, whether it is for legal reasons or not. We need to do recordkeeping. We need to liaise with other people and consult with them. There are legal requirements for that in many countries. This standard is going to help us do all those things.

But, of course, in a standard everything has pros and cons and Mil. Standard 882 is no exception. So, let’s look at some of the disadvantages.

Disadvantages #1

First of all, a full system safety program might be overkill for the system that you want to use, or that you want to analyze. The Cold War, thank goodness, is over; generally speaking, we’re not in the business of developing cutting-edge high-performance killing machines that cost billions and billions of dollars and are very, very risky. These days, we tend to reduce program risk and cost by using off-the-shelf stuff and modifying it. Whether that be for military systems, infrastructure in the chemical industry, transportation, whatever it might be. Very much these days we have a family of products and we reuse them in different ways. We mix and match to get the results that we want.

And of course, all this comprehensive analysis is not cheap and it’s not quick. It may be that you’ve got a program that is schedule-constrained. Or you want to constrain the cost and you cannot afford the time and money to throw a full 882 program at it. So, that’s a disadvantage.

The second family of problems is that these kinds of safety standards have often been applied prescriptively. The customer would often say, ‘Go away and do this. I’m going to tell you what to do based on what I think reduces my risk’. Or at least it covered their backside. So, contractors got used to being told to do certain things by purchasers and customers. Those customers didn’t understand the standards that they were applying and insisting upon, and so didn’t understand how to tailor a safety standard to get the result that they wanted. So they asked for dumb things, or things that didn’t add value. And the contractors got used to working in that kind of environment. They got used to being told what to do and doing it, because they wouldn’t get paid if they didn’t. So, you can’t really blame them.

But that’s not great, OK? That can result in poor behaviors. You can waste a lot of time and money doing stuff that doesn’t actually add value. And everybody recognizes that it doesn’t add value. So you end up bringing the whole safety program into disrepute and people treat it cynically. They treat it as a box-ticking exercise. They don’t apply creativity and imagination to it. Much less determination and persistence. And that’s what you need for a good effective system safety program. You need creativity. You need imagination. You need people to be persistent and dedicated to doing a good job. You need that rigor so that you can have the confidence that you’re doing a good job because it’s intangible.

Disadvantages #2

Let’s move on to the second family of disadvantages. And this is the one that I’ve seen the most, actually, in the real world. If you do all 10 tasks – and even if you don’t do all 10 – you can create too many hazards. If you recall the graphic from earlier, we have 10 tasks. Each task looks at the system from a different angle. What you can get is lots and lots of duplication in hazard identification. You can have essentially the same hazards identified over and over again in each task. And that’s a problem in two ways.

First of all, quality suffers. We end up with a fragmented picture of hazards. We end up with lots and lots of hazards in the hazard log, but not only that. We get fragments of hazards rather than the real thing. Remember those tests for what a hazard really is? Very often you can get causes masquerading as hazards. Or other things that are really exacerbating factors that make things worse. They’re not hazards in their own right, but they get recorded as hazards. And that problem results in people being unable to see the big picture of risk. So that undermines what we’re trying to do. And as I say, we get lots of things misidentified and thrown into the pot. This also distracts people. You end up putting effort into managing things that don’t make a difference to safety. They don’t need to be managed. Those are the quality problems.

And then there are quantity problems. And from personal experience, having too many hazards is a problem in itself. I’ve worked on large programs where we were managing 250 hazards or thereabouts. That is challenging even with a sizable, dedicated team. That is a lot of work in trying to manage that number of hazards effectively. And there’s always the danger that it will slide into becoming a box-ticking exercise. Superficial at best.

I’ve also seen projects that have two and a half thousand hazards, or even 4,000 hazards, in the hazard log. Now, once you get up to that level, that is completely unmanageable. People who have thousands of hazards in a hazard log and think they’re managing safety are kidding themselves. They don’t understand what safety is if they think that’s going to work. So, you end up with all these items in your hazard log, which become a massive administrative burden. People end up taking shortcuts and the real hazards are lost. The real issues that you want to focus on are lost in a sea of detail that nobody will ever understand. You won’t be able to control them.
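One simple defence against this kind of bloat is to screen new hazard-log entries against what is already recorded. The sketch below is a hypothetical illustration, not anything defined in the standard: it assumes hazards are held as plain description strings, whereas real hazard-tracking tools would compare structured fields.

```python
import re

def normalise(text):
    """Reduce a hazard description to a comparable form:
    lower-case, punctuation stripped, whitespace collapsed."""
    return " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())

def screen_duplicates(hazard_log, candidate):
    """Return existing entries whose normalised text matches the candidate."""
    key = normalise(candidate)
    return [h for h in hazard_log if normalise(h) == key]

# Example: the same hazard phrased slightly differently is caught.
log = ["Loss of hydraulic pressure.", "Fuel leak near ignition source"]
dupes = screen_duplicates(log, "loss of HYDRAULIC pressure")
```

Exact-match screening like this only catches the most obvious repeats; the harder cases (causes masquerading as hazards, fragments of one hazard logged as several) still need a human reviewer who understands what a hazard really is.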

Unfortunately, Mil. Standard 882 is good at generating these grotesque numbers of hazards. If you don’t know how to use the standard and don’t actively manage this issue, it gets to this stage. It can, and does, go badly wrong. This is particularly true on very big programs. And you really need clarity on big projects.

Summary of Module

Let’s summarize what we’ve done with this module. The aim was to help us understand whether we’re doing the right thing and whether we’ve done it right. And standards are terrific for helping us to do that. They help us to ensure we’re doing the right thing. That we’re looking at the right things. And they help us to ensure that we’re doing it rigorously and repeatably. All the good quality things that we want. And Mil. Standard 882E that we’re looking at is a system safety engineering standard. So it’s designed to deal with complexity, high performance, and high risk. And it’s got a great pedigree. It’s been around for a long time.

Now that gives advantages. So, we have a system safety program with this standard that helps us to deal with complexity. That can cope with big programs, with lots of risks. That’s great.

The disadvantages of this standard are that if we don’t know how to tailor or manage it properly, it can cost a lot of money. It can take a lot of time to give results, which can cause problems for the program. And ultimately, safety can end up being ignored if you don’t deliver on time. And it can generate complexity. It can generate a quantity of data so great that it actually undermines the quality of the data. It undermines what we’re trying to achieve, in that we get a fragmented picture in which we can’t see the true risks. And so we can’t manage them effectively. If we get it wrong with this standard, we can get it really wrong. And that brings us to the end of this module.

This is Module 3 of SSRAP

This is Module 3 from the System Safety Risk Assessment Program (SSRAP) Course. Risk Analysis Programs – Design a System Safety Program for any system in any application. You can access the full course here.

You can find more introductory lessons at Start Here.


System Safety Risk Assessment

Learn about System Safety Risk Assessment with The Safety Artisan.

In this module, we’re going to look at how we deal with the complexity of the real world. We do a formal risk analysis because real-world scenarios are complex. The Analysis helps us to understand what we need to do to keep people safe. Usually, we have some moral and legal obligation to do it as well. We need to do it well to protect people and prevent harm to people.

You Will Learn to:

  • Explain what a system safety approach is and does; and
  • Define what a risk analysis program is; 
System Safety Risk Analysis.

Topics: System Safety Risk Assessment

Aim: How do we deal with real-world complexity?

  • What is System Safety?
  • The Need for Process;
  • A Realistic, Useful, Powerful process:
    • Context, Communication & Consultation; and
    • Monitoring & Review, Risk Treatment.
  • Required Risk Reduction.

Transcript: System Safety Risk Assessment

Click here for the Transcript on System Safety Risk Assessment

In this module, on System Safety Risk Assessment, we’re going to look at how we deal with the complexity of the real world. We do a formal risk analysis because real-world scenarios are complex. The Analysis helps us to understand what we need to do to keep people safe. Usually, we have some moral and legal obligation to do it as well. We need to do it well to protect people and prevent harm to people.

What is System Safety?

To start with, here’s a little definition of system safety. System safety is the application of engineering and management principles, criteria, and techniques to achieve acceptable risk within a wider context. This wider context is operational effectiveness – We want our system to do something. That’s why we’re buying it or making it. The system has got to be suitable for its use. We’ve got some time and cost constraints and we’ve got a life cycle. We can imagine we are developing something from concept, from cradle to grave.

And what are we developing? We’re developing a system. An organization of hardware, software, material, facilities, people, data, and services. All these pieces will perform a designated function within the system. The system will work within a stated or defined operating environment. It will work with the intention of producing specified results.

We’ve got three things there. We’ve got a system. We’ve got the operating environment within which it works – or is designed to work. And we have the thing that it’s supposed to produce; its function or its application. Why did we buy it, or make it, in the first place? What’s it supposed to do? What benefits is it supposed to bring humankind? What does it mean in the context of the big picture?

That’s what a system is. I’m not going to elaborate on systems theory or anything like that. That’s a whole big subject on its own. But we’re talking about something complex. We’re not talking about a toaster. It’s not consumer goods. It’s something complicated that operates in the real world. And as I say, we need to understand those three things – system, environment, purpose – to work out Safety.
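Those three things – system, environment, purpose – can be captured in a simple structure. A minimal sketch in Python; the type and field names here are my own for illustration, not terms from the standard:

```python
from dataclasses import dataclass

@dataclass
class SystemOfInterest:
    """The three things we must understand before assessing safety."""
    elements: list            # hardware, software, material, facilities, people, data, services
    operating_environment: str  # where and how it is designed to work
    purpose: str              # the specified results it exists to produce

# A hypothetical example of the three-part description.
aircraft = SystemOfInterest(
    elements=["airframe", "avionics software", "aircrew", "maintenance data"],
    operating_environment="all-weather, day and night military air operations",
    purpose="air defence",
)
```

The point of writing it down like this is simply that a safety assessment is incomplete if any one of the three parts is missing.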

We Need A Process

We’ve sorted our context. How is all this going to happen? We need a process. In the standard that we’re going to look at in the next module, we have an eight-element process. As you can see there, we start with documenting our approach. Then we identify and document hazards. We document everything according to the standard – don’t forget that.

We assess risk. We plan how we’re going to mitigate the risk. We identify risk mitigation measures, or controls as they are often known. Then we apply those controls to reduce risk. We verify and confirm the risk reduction that we have achieved, or that we believe we will achieve. And then we’ve got to get somebody to accept that risk. In other words, to say that it is an acceptable level of risk. That we can put up with this level of risk in exchange for the benefits that the system is going to give us. Finally, we need to manage risk through the entire lifecycle of the system until we finally get rid of it.
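As a memory aid, the eight elements just described can be listed in order. An illustrative sketch; the wording paraphrases the element names rather than quoting the standard verbatim:

```python
# The eight-element process, in order (paraphrased, not verbatim).
EIGHT_ELEMENTS = [
    "Document the system safety approach",
    "Identify and document hazards",
    "Assess and document risk",
    "Identify and document risk mitigation measures",
    "Reduce risk",
    "Verify, validate, and document risk reduction",
    "Accept risk and document it",
    "Manage life-cycle risk",
]

def next_element(current_index):
    """Each element flows into the next; the final element
    continues for the life of the system."""
    return EIGHT_ELEMENTS[min(current_index + 1, len(EIGHT_ELEMENTS) - 1)]
```

Treating the elements as an ordered sequence reflects what the slide’s arrows show: each step supports and enables the one after it.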

The key point about this is whatever process we follow, we need to approach it with rigor. We stick to a systematic process. We take a structured and rigorous approach to looking at our system.

And as you can see there from the arrows, every step in the eight-element sequence flows into the next step. Each step supports and enables the following steps. We document the results as we go. However, even this example is a little bit too simple.

A More Realistic Process

So, let’s get a more realistic process. What we’ve got here are the same things we’ve had before. We’ve established the context at the beginning. Next, there’s risk assessment. Risk assessment consists of risk identification, risk analysis, and risk evaluation. It asks ‘Where are we?’ in relation to a yardstick or framework that categorizes risk. The category determines whether a risk is acceptable or not.

After determining whether the risk is acceptable or not, we may need to apply some risk treatment. Risk Treatment will reduce the risk further. By then we should have the risk down to an acceptable level.

So, that’s the straight-through process, once through. In the real world, we may have to go around this path several times. Having treated the risk over a period of time, we need to monitor and review it. We need to make sure that the risk turns out, in reality, to be what we estimated it to be. Or at least no worse. If it turns out to be better – well, that’s great!

And on that monitoring and review cycle, maybe we even need to go back because the context has changed. These changes could include using the system to do something it was not designed to do. Or modifying the system to operate in a wider variety of environments. Whatever it might be, the context has changed. So, we need to look again at the risk assessment and go round that loop again.

And while we’re doing all that, we need to communicate with other people. These other people include end-users, stakeholders, other people who have safety responsibilities. We need to communicate with the people who we have to work with. And we have to consult people. We may have to consult workers. We may have to consult the public, people that we put at risk, other duty holders who hold a duty to manage risk. That’s our cycle. That’s more realistic. In my experience as a safety engineer, this is much more realistic. A once-through process often doesn’t cut it.

Required Risk Reduction

We’re doing all this to drive risk down to an acceptable level. Well, what do we mean by that? There are several different ways that we can do this, and I’ve illustrated two of them here. On the left-hand side of the slide, we have what’s usually known as the ALARP triangle. It’s the thing that looks a bit like a carrot, where the width of the triangle indicates the amount of risk. So, at the top of the triangle, we’ve got lots of risk. And if you’re in the UK, or Australia where I live, this is the way it’s done. There will be some level of risk that is intolerable. Then, if the risk isn’t intolerable, we can only tolerate or accept it if it is ALARP or SFARP. ALARP means that we’ve reduced the risk as low as reasonably practicable. And SFARP means so far as is reasonably practicable. Essentially, they’re the same thing – reasonably practicable.

We must ensure that we have applied all reasonably practicable risk reduction measures. And once we’ve done so, if we’re in this tolerable or acceptable region, then we can live with the risk. The law allows us to do that.

That’s how it’s done in the UK and Australia. But in other jurisdictions, like the USA, you might need to use a different approach. A risk matrix approach, as we can see on the right-hand side of this slide. This particular risk matrix is from the standard we’re about to look at. And we could take that and say, ‘We’ve determined what the risk is. There is no absolute limit on how much risk we can accept. But the higher the risk, the more senior the level of sign-off from management we need’. In effect, you are prioritizing the risk. So you only bring the worst risks to the attention of senior management. You are asking, ‘Will you accept this? Or are you prepared to spend the money? Or will you restrict the operation of the system to reduce the risk?’. This is good because it makes people with authority consider risks. They are responsible and need to make meaningful decisions.
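The sign-off idea can be sketched as a lookup: severity and probability give a risk level, and the risk level determines who must accept it. The cell values and acceptance authorities below are my own assumptions for the sketch, not the published 882E table – take the real assignments from the standard itself.

```python
# Severity categories 1-4 and probability levels A-E, in the style of a
# military risk matrix. All assignments below are ILLUSTRATIVE ONLY.
RISK_MATRIX = {
    (1, "A"): "High", (1, "B"): "High", (1, "C"): "High",
    (1, "D"): "Serious", (1, "E"): "Medium",
    (2, "A"): "High", (2, "B"): "High", (2, "C"): "Serious",
    (2, "D"): "Serious", (2, "E"): "Medium",
    (3, "A"): "Serious", (3, "B"): "Serious", (3, "C"): "Medium",
    (3, "D"): "Medium", (3, "E"): "Low",
    (4, "A"): "Medium", (4, "B"): "Medium", (4, "C"): "Low",
    (4, "D"): "Low", (4, "E"): "Low",
}

# Assumed sign-off levels: the higher the risk, the more senior the approver.
ACCEPTANCE_AUTHORITY = {
    "High": "Executive management",
    "Serious": "Program executive",
    "Medium": "Program manager",
    "Low": "Program manager",
}

def sign_off(severity, probability):
    """Return (risk level, who must accept it) for one assessed risk."""
    level = RISK_MATRIX[(severity, probability)]
    return level, ACCEPTANCE_AUTHORITY[level]
```

So a catastrophic, frequent risk (severity 1, probability A) escalates to the most senior approver, while a negligible, improbable one stays with the program manager – which is exactly the prioritization effect described above.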

In short, different approaches are legal in different jurisdictions.

Summary of Module

In Module Two, we’ve asked ourselves, ‘How can we deal with real-world complexity?’. And one way that’s developed to do that is System Safety. System Safety is where we take a systematic approach to safety. This approach applies to both the system itself – the product – and the process of System Safety.

We address product and process. We need that rigorous process to give us confidence that what we’ve done is good enough. We have a realistic, useful and powerful process that enables us to put things in context. It helps us to communicate with everyone we need to, and to consult with those whom we have a duty to consult. We also wrap monitoring and review around the basic risk process. And of course, we analyze risk in order to reduce it: we’ve got to treat, reduce or control the risk in some way to get it to acceptable levels. In the end, it’s all about achieving the required risk reduction – the reduction that makes the risk acceptable to expose human beings to, for the benefit that it will give us.

This is Module 2 of SSRAP

This is Module 2 from the System Safety Risk Assessment Program (SSRAP) Course. Risk Analysis Programs – Design a System Safety Program for any system in any application. You can access the full course here.

You can find more introductory lessons at Start Here.

Categories
Safety Analysis

Risk Analysis Programs

Risk Analysis Programs – Design a System Safety Program for any system in any application.

Introduction to the System Safety Risk Analysis Programs Course.

Risk Analysis Programs: Learning Objectives

At the end of this course, you will be able to:

  • Describe fundamental risk concepts;
  • Explain what a system safety approach is and does;
  • Define what a risk analysis program is;
  • List the hazard analysis tasks that make up a program;
  • Select tasks to meet your needs;
  • Design a tailored analysis program for any application; and
  • Know how to get more information and resources.
Risk Analysis Programs: Click Here for the Transcript

Hello and welcome to this course on Systems Safety Risk Analysis Programs. I’m Simon Di Nucci, The Safety Artisan, and I’ve been a safety engineer and consultant for over 20 years. And I have worked on a wide range of safety programs doing risk analysis on all kinds of things. Ships, planes, trains, air traffic management systems, software systems, you name it. I’ve worked in the U.K., in Australia, and on many systems from the US. I have also spent hundreds of hours training hundreds of people on safety. And now I’ve got the opportunity to share some of that knowledge with you online.

So, what are the benefits of this course?

First of all, you will learn the basic concepts of system safety – what it is and what it does. You will know how to apply a risk analysis program to a very complex system and how to manage that complexity. So, that’s what you’ll know.

At the end of the course, you will also be able to do things that you might not have been able to do before. You will be able to take the elements of a risk analysis program and its different tasks, select the right tasks, and form a program to suit your application, whatever it might be. Whether you have a full, high-risk bespoke development system, or you’re taking a commercial system off the shelf and doing something new with it, or you’re taking a product and using it in a new application or a new location – whatever it might be, you will learn how to tailor your risk analysis program. This program will give you the analyses you need to meet your legal and regulatory requirements. Once you’ve learned how to do this, you can apply it to almost any system.

Finally, you will feel confident doing this. I will be interpreting the terminology used in the tasks and applying my experience. So, instead of reading the standard and being unsure of your interpretation, you can be sure of what you need to do. Also, I will show you how you can get good results and avoid some of the pitfalls.

So, these are the three benefits of the program.

1) You will know what to do.

2) You will be able to perform risk program tasks, and …

3) You’ll feel confident doing those tasks.

At the end of the course, I will also show you where to find further resources. There are free resources to choose from. But there are also paid resources for those who want to take their studies to the next level. I hope you enjoy the course.

Get the supporting safety analysis courses here.

Categories
Safety Analysis

Environmental Hazard Analysis

This is the full-length (one hour) session on Environmental Hazard Analysis (EHA), which is Task 210 in Mil-Std-882E. I explore the aim, task description, and contracting requirements of this Task, but this is only half the video. In the commentary, I then look at environmental requirements in the USA, UK, and Australia, before examining how to apply EHA in detail under the Australian/international regime. This uses my practical experience of applying EHA. 

You Will Learn to:

  • Conduct EHA according to the standard;
  • Record EHA results correctly;
  • Contract for EHA successfully;
  • Be aware of the regulatory scene in the US, UK, and Australia;
  • Appreciate the complexities of conducting EHA in Australia; and
  • Recognize when your EHA program requires specialist support.
This is the seven-minute demo of the full-length (one hour) session on Environmental Hazard Analysis.

Topics: Environmental Hazard Analysis

  • Environmental Hazard Analysis (EHA) Purpose;
  • Task Description (7+ slides);
  • Documentation, HAZMAT & Contracting (2 slides each);
  • Commentary (8 slides); and
  • Conclusion.
Transcript: Environmental Hazard Analysis

Environmental Hazard Analysis – Full Version

Introduction

Hi, everyone, and welcome to the Safety Artisan. Today, we’re going to be talking about Environmental Hazard Analysis – big topic! I’m covering this as part of the series on the System Safety Engineering Standard, Mil-Std-882E. But it doesn’t really matter what standard we are using; the topic is still relevant.

So, Environmental Hazard Analysis – it’s a big topic because we’re going to cover everything, not just hazards. At the end of this session, you should be able to enjoy three benefits. First, you should know how to approach Environmental Hazard Analysis from the point of view of the requirements, the hazard analysis process itself, and some national and international variations in the English-speaking world. Second, you should know how to do the basics and also recognize when you may need to bring in a specialist. But maybe most important of all, number three: you should have the confidence to get started. So I’m hoping that this session will really help you get started, know what you can do, and then recognize when you need to bring in some specialist help or go and seek further information.

As you’ll see, it’s a big, complex subject. I can get you started today, but that’s all I can do in one session. In fact, I think that’s all anyone can do in one session. Anyway, let’s get on with it and see what we’ve got.

Environmental Hazard Analysis, Mil. Standard 882E Task 210

Environmental Hazard Analysis is Task 210 under Mil. Standard 882E. So, let’s look at what we’re going to talk about today.

Topics for this Session

You’ll see why it’s going to be quite a lengthy session – I think it will last an hour – because we’re going to go through the Purpose and Task Description of Environmental Hazard Analysis as set out in the Mil. Standard. It says seven-plus slides because there are seven main slides plus some illustrations as well. Then we’ve got a couple of slides each on Documentation, Hazardous Materials or HAZMAT, and Contracting. And then eight slides of Commentary, and this is the major value-add, because I’ll be talking about applying Environmental Hazard Analysis in the US, UK, and Australian jurisdictions under their different laws, which I have some experience of.

I worked closely with environmental specialists on the Eurofighter Typhoon project, and I’ve also worked closely with the same specialists on US programs that had been bought by different countries. Finally, I’ve been closely involved in a major environmental – or safety and environmental – project here in Australia. I’ve been exposed and learned the hard way about how things work or don’t work here in Australia. So I’ve got some relevant experience to share with you, as well as some learned material to share with you. And then a little Conclusion, because I say this will take us an hour so there’s quite a lot of material to cover. So, let’s get right on with it.

EHA

So the purpose of Environmental Hazard Analysis, or EHA, as it says, is to support design development decisions. Now, all of the 882 tasks are meant to do this, but the wording in Task 210 is the clearest of all of them. It really makes explicit what we’re trying to do, which is excellent.

So we’re going to identify hazards throughout the life cycle – cradle to grave, whatever system it is. We’re going to document and record those hazards and their leading particulars within the Hazard Tracking System, or Hazard Log as we more often call it. We’re going to manage the hazards using the same system safety process from Section Four that we use for safety – the process that you will have heard about in the other lessons that I’ve been giving. And very often under 882, safety and environmental hazards are considered together. There are pros and cons to that approach, but nevertheless a lot of the work is common. We’ll see why later on.

And in this American standard, it says we are to provide specific data to support the National Environmental Policy Act and executive order requirements. So the NEPA is an American piece of legislation and therefore I use this colour blue to indicate anything that’s an American-specific requirement. If you’re not operating in America, you’ll need to find the equivalent to manage to and to comply with. Moving on.

Task Description (T210) #1

Let’s start going through the task description. There are really excellent words here: “integrating environmental considerations into the systems engineering process”. As I’ve said repeatedly in this series, we need the systems engineering process to give us context, to root the safety engineering process, to give us traction, and to keep us connected to the real world. It’s great that the EHA task recognizes that and explicitly says that’s what this is about. So it’s good guidance for the people who use it.

We’re assuming that a contractor is going to follow the EHA process, but whoever it is, they need to start early, so that once the systems engineering process is initiated, whoever is doing the analysis will start identifying and managing hazards – ideally in the requirements phase – and then use those system safety processes to… [buy the video to get the full transcript].

So, it just remains for me to say thank you very much for listening. And that’s been Environmental Hazard Analysis. So, thank you very much and goodbye.

Links: Environmental Hazard Analysis

The links mentioned in the video are here:

You can find a free pdf of the System Safety Engineering Standard, Mil-Std-882E, here.

Categories
Mil-Std-882E Safety Analysis

System of Systems Hazard Analysis

In this full-length (38-minute) session, The Safety Artisan looks at System of Systems Hazard Analysis, or SoSHA, which is Task 209 in Mil-Std-882E. SoSHA analyses collections of systems, which are often put together to create a new capability, which is enabled by human brokering between the different systems. We explore the aim, description, and contracting requirements of this Task, and an extended example to illustrate SoSHA. (We refer to other lessons for special techniques for Human Factors analysis.)

This is the seven-minute demo version of the full 38-minute video.

System of Systems Hazard Analysis: Topics

  • System of Systems (SoS) HA Purpose;
  • Task Description (2 slides);
  • Documentation (2 slides);
  • Contracting (2 slides);
  • Example (7 slides); and
  • Summary.

Transcript: System of Systems Hazard Analysis

Click here for the Transcript

Introduction

Hello everyone and welcome to the Safety Artisan. I’m Simon and today we’re going to be talking about System of Systems Hazard Analysis – a bit of a mouthful that. What does it actually mean? Well, we shall see.

System of Systems Hazard Analysis

So, for Systems of Systems Hazard Analysis, we’re using task 209 as the description of what to do taken from a military standard, 882E. But to be honest, it doesn’t really matter whether you’re doing a military system or a civil system, whatever it might be – if you’ve got a system of systems, then this will help you to do it.

Topics for this Session

Looking at what we’ve got coming up.

So, we look at the purpose of a system of systems – and by the way, if you’re wondering what that is: I’m talking about when we take different things that we’ve developed elsewhere, e.g. platforms, electronic systems, whatever it might be, and we put them together. Usually with humans gluing the system together somewhere, it must be said, to make it all tick and fit together. Then we want this collection of systems to do something new, to give us some new capability that we didn’t have before. So, that’s what I’m talking about when I say a system of systems. I’ll show you an example – it’s the best way. We’ve got a couple of slides on the task description, a couple of slides on documentation, and a couple of slides on contracting. Task 209 is a very short task, and therefore I’ve decided to go through an example.

So, we’ve got seven slides of an example of a system-of-systems safety case and safety case report that I wrote. Hopefully, that will illustrate it far better than just reading out the description. It will also bring out some issues that can emerge with systems of systems, and I’ll summarize those at the end.

SoSHA Purpose

So, let’s get on. I’m going to call it SoSHA for short: System of Systems Hazard Analysis. The purpose of SoSHA, Task 209, is to perform and document an analysis of the system of systems and identify unique system-of-systems hazards – things we don’t get from each system in isolation. This task is going to produce special requirements to deal with these hazards, which otherwise would not exist, because they only arise when we put the things together and start using them for something new.

Task Description (T209) #1

Task description: As in all of these tasks, the contractor shall perform and document an analysis of the system of systems to identify hazards and mitigation requirements. A big part of this, as I said earlier, is that we tend to use people to glue these collections, these portfolios, of systems together, and humans are fantastic at doing that. It’s not always the ideal way of doing it, but sometimes it’s the only way within the constraints that we’ve got. The human is very important. The human will receive inputs from one or more systems and initiate outputs – within the analysis and, in fact, within the real world, which is what we’re trying to analyse.

And we’ve got to provide traceability of all those hazards to – it says – architecture locations, interfaces, data and stakeholders associated with the hazard. This is particularly important because, with a system of systems, each system tends to come with its own set of stakeholders, its own physical location, its own interfaces, and so on. The effort of managing all of those things and maintaining traceability multiplies with every system you’ve got; in fact, I would say it grows with the square of the number of systems. In the example we’ll see, we’ve got three systems being put together in a system of systems and, in effect, we had nine times the amount of work in that area. I think that’s a reasonable approximation.
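That square-law claim is easy to see if you count the pairings to be traced. Here is a minimal sketch, assuming – purely for illustration – that every system can interact with every system, including itself via its own stakeholders and interfaces. The system names are hypothetical:

```python
from itertools import product

# Three example systems in a system of systems (names are hypothetical).
systems = ["radar", "ship", "aircraft"]

# Directed pairings to trace, including each system's own internal
# stakeholders/interfaces: n systems -> n * n pairings to manage.
interactions = [(src, dst) for src, dst in product(systems, repeat=2)]

print(len(interactions))  # 9 pairings for 3 systems - the 'square of' effect
```

Real programs won’t have every pairing active, but the sketch shows why the traceability workload grows so quickly as systems are added.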

Task Description (T209) #2

Part two of the task description: The contractor will assess the risk of each hazard and recommend mitigation measures to eliminate the hazards or – since very often we can’t eliminate the hazards – to reduce the associated risks. Then, as always with this standard, it says we’re going to use Tables One, Two and Three, which are the severity table, the probability table and the risk matrix that come with the standard – unless, of course, we have created or tailored our own matrix, which we very often should do but which isn’t often done. I’ll have to do a session on how to tailor a matrix.

Then the contractor has got to verify and validate the effectiveness of those recommended mitigation measures. Now, that’s a really good point, and I often see it missed. People come up with control measures or mitigation measures but don’t always assess how effective they’re going to be. Sometimes you can’t, so we just have to be conservative, but it’s not always done as well as it could be.

Documentation (T209) #1

So, let’s move on. Documentation: whoever does the analysis – the standard assumes it’s a contractor – shall document the results, including a description of the system of systems and its physical and functional characteristics, which is very important. Capturing these things is not a given. It’s not easy when you’ve got one system, but when you’ve got multiple systems, some of which are being used to do something they’ve never done before, perhaps, then you’ve got to take extra care.

Then, basically, it says that when you get more detail on the individual systems, you need to supply that when it becomes available. Again, that’s important. And if the contractor supplies it, who’s going to check it? Who’s going to verify it? And so on.

Documentation (T209) #2

Slide two on documentation: We’ve got to describe the hazard analysis methods and techniques used, providing a description of each method and technique, and the assumptions and data used in support. This is important because I’ve seen lots of cases where you get a hazard analysis’s results and you only get the results. It’s impossible to verify or validate those results, or to say whether they’ve been produced in the correct context. And it’s impossible to say whether the results are complete, or whether they’re up to date, or even whether they were analysing the correct system, because systems often come in different versions. How do you know that the version analysed was the version you’re actually going to use? Without that description, you don’t know. So, it’s important to contract for these things.

And then hazard analysis results: what contents and formats do you want? It’s important to say. Also, we’re going to put the key items, the leading particulars, from the results – the top-level results – into the hazard tracking system, which is more commonly known as a hazard log or a risk register. It might be an Excel spreadsheet or a very fancy database, but whatever it is, you’re going to have to standardize your fields and what they mean. Otherwise, the data is going to be a mess – poor quality and not very usable. So, again, you’ve got to contract for these things upfront, make clear definitions, and say what you want.
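To make “standardize your fields” concrete, here is a minimal sketch of a hazard log record. The field names and the example entry are hypothetical – your contract or standard would define the actual set – but the point is that every entry carries the same, defined fields, so the log stays queryable:

```python
from dataclasses import dataclass, field

@dataclass
class HazardRecord:
    """One standardized entry in a hazard tracking system (illustrative fields)."""
    hazard_id: str            # unique reference, e.g. "SOS-HAZ-001"
    description: str          # what can go wrong
    severity: str             # per the agreed severity table
    probability: str          # per the agreed probability table
    risk_level: str           # from the agreed risk matrix
    controls: list[str] = field(default_factory=list)  # mitigation measures
    status: str = "Open"      # e.g. Open / Controlled / Accepted

log: list[HazardRecord] = [
    HazardRecord(
        hazard_id="SOS-HAZ-001",
        description="Aircraft guided onto ship with degraded radar data",
        severity="Catastrophic",
        probability="Remote",
        risk_level="Serious",
        controls=["Operator talk-down procedure", "Radar performance monitoring"],
    ),
]

# Because the fields are standardized, simple queries work across the whole log.
open_serious = [h.hazard_id for h in log if h.status == "Open" and h.risk_level == "Serious"]
print(open_serious)  # ['SOS-HAZ-001']
```

Whether the log lives in a spreadsheet or a database, agreeing this kind of schema upfront – in the contract – is what keeps the data usable.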

Contracting #1

Contracting: implicitly, we’ve been talking about contracting already, but this is what the standard says. The Request for Proposal or Statement of Work has got to include the following. Typically, we have an RFP before we’ve got a contract, so we need to have worked out what we need really early in the program or project, which isn’t always done very well. To work that out, the customer – the purchaser – has probably got to do some analysis of their own. And I know I say this every time with these tasks, but it is so important: you can’t just dump everything on the contractor and expect them to produce good results, because often the contractor is hamstrung. If you haven’t done your homework to help them do their work, then you’re going to get poor results, and it’s not their fault.

So, we’ve got to impose the requirement for the task if we want or need it. We’ve got to identify the functional disciplines – which specialists are going to do this work? Very often, the safety team are generalists. They may not have specialist technical knowledge in some of these areas, or they may not be human factors specialists. We need to bring in some human factors specialists and some user representatives – people who understand how the system will be used in real life and what the real-world constraints are. We need those stakeholders involved; that’s very important. We’ve also got to identify the architectures and systems which make up the SoS, and the concept of operations. A system of systems is very much about delivering capability, so it’s all about what you are going to do with the whole thing when you put it together, and how it’s all going to work.

Contracting #2

An interesting one, E, which I think is unique to Task 209: what are the locations of the different systems, and how far apart are they? We might be dealing with systems where the distance between them is so great that transmission time becomes an issue for energy or communications. Let’s say you’re bouncing a signal from an aircraft or a drone around the world via a couple of satellites back to home base – there could be a significant lag in communications. We need to understand all of these things because they might give rise to hazards or reduce the effectiveness of controls.

Part F: what analysis methods and techniques do you want used? And any specific data to be used? Again, with these collections of systems, that becomes both more difficult to specify and more important. And then, do we have any specific hazard management requirements? For example, are we using the standard definitions and risk matrix from a standard, or have we got our own? That all needs to be communicated.

Example #1

So, that is the totality of the task. As you can see, there’s not much to Task 209, so I thought it would be much more helpful to use an example, an illustration – and, as they used to say in children’s TV, “Here’s one I made earlier” – because a few years ago I had to produce a safety case report. I was the safety case report writer, and there was a small team of us generating the evidence and doing the analysis for the safety case itself.

What we were asked to do was to assure the safety of a system – in fact, it was two systems, but I’ll treat them as one – for guiding aircraft onto ships in bad weather. All of these things existed beforehand. The aircraft were already in service. The ships were already in service. Some of the systems were already in service, but we were putting them together in a new combination. So, we had to take into account human factors. That was very important, and we’ll see why in just a moment.

Then there was the operating environment, which was quite demanding. The whole point is to get the aircraft safely back to the ships in bad weather. In good weather, you could do it visually, but in bad weather, visual wasn’t going to cut it. So we were being asked to operate in a much more difficult environment, and that changed everything and drove everything.

We’ve also got to consider operating procedures because, as we’re about to see, people are gluing the systems together. So, how do they make it work? And we’ve got to think about maintenance and management, although in actual fact, we didn’t really consider maintenance and management that much. As an ex-maintainer, this annoys me, but the truth is people are much more focused on getting their capability into service. Often, they think about support as an afterthought. We’ll talk about that one day.

Example #2

Here’s a little demonstration of our system of systems. Bottom right-hand corner, we’ve got the ship with lots of people on the ship. So, if the aircraft crashes into it that could be bad news, not just for the people in the aircraft, but for the people on the ship – big risks there!

We’ve got our radar mounted on the ship, so the ship is supplying the radar with power, control and data – telling it where to point, for example. Also, the ship might be inadvertently interfering with the radar. There are lots of other electronic systems on the ship, there are bits of the ship getting in the way of the radar, depending on where you’ve put it, and so on. So, the ship interacts with the radar and the radar interacts with the ship. The radar is producing radiation – could that be doing anything to the ship’s systems?

And then the radar is being operated. Now, I think that symbol is meant to indicate a DJ – we’ve got the DJ wearing headphones and a disc that looks like a radar scope to me – so I’ve hijacked it. That’s the radar operator, who is going to talk to the pilot and give the pilot verbal commands to guide them safely back to the ship. So, that’s how the system works.

In an ideal world, the ship would use the radar and then talk electronically direct to the aircraft and guide it – maybe automatically? That would be a much more sensible setup. In fact, that’s often the way it’s done. But in this particular case, we had to produce a bit of a – I hesitate to call it a lash-up because it was a multi-million-dollar project, but it was a bit of a lash-up.

So, there are the human factors. We’ve got a radar operator doing quite a difficult job and a pilot doing a very difficult job, trying to guide their aircraft back onto the ship in bad weather. How are they going to interact and perform? And then lastly, as I alluded to earlier, the aircraft and the ship do actually interact in a limited way. It’s a physical interaction, so it can actually hurt people. And of course, if we get it wrong, the aircraft interacts with the surface of the ocean, which is very bad indeed for the aircraft. So, we’ve got to be careful there. There’s a little illustration of our system of systems.

Example #3

And this is the top-level argument that we came up with. It’s in Goal Structuring Notation (GSN), but don’t worry too much about that – we’ll have a session on how to do GSN another time.

So, our top goal, or claim if you like, is that our system of systems is adequately safe for the aircraft to locate and approach the ship. That’s a very basic, very simple statement, but of course the devil is in the detail, and all of that detail we call the context. Surrounding that top goal or claim, we’ve got descriptions of the system, the aircraft and the ship. We’ve got a definition of what we mean by adequately safe, and we’ve got safety targets and reporting requirements.

So, what supports the top goal? We’ve got a strategy: after a lot of consultation and design of the safety argument, we came up with a strategy where we said, “We are going to show that all elements of the system of systems are safe and all the interactions are safe”. To do that, we had to come up with a scope and some assumptions to underpin it and to simplify things. Again, they sit in the context; we keep the essence of the argument down the middle.

And then underneath, we’ve got four subgoals. We aim to show that each system is safe to operate, so it’s ready to be operated safely; that each one is safe in operation, so it can be operated safely with real people; that all system-of-systems safety requirements are satisfied for the whole collection; and finally, that all interactions are safe. If we can argue all four of those, we should have covered everything. Now, I suspect if I did this again today, I might do it slightly differently – maybe a little more elegantly – but that’s not the point. The point is, we came up with this and it worked.
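The argument structure just described can be sketched as a simple tree. This is only a data-structure paraphrase of the GSN in this example – a real argument would be built and maintained in a GSN tool, and the node wording here is mine, not the actual safety case text:

```python
# A minimal tree sketch of the top-level GSN argument described above.
# Node text paraphrases the example; it is not the actual safety case wording.
argument = {
    "goal": "SoS is adequately safe for aircraft to locate and approach the ship",
    "context": [
        "Descriptions of the system, aircraft and ship",
        "Definition of 'adequately safe'",
        "Safety targets and reporting requirements",
    ],
    "strategy": "Show all elements of the SoS are safe and all interactions are safe",
    "assumptions": ["Agreed scope", "Simplifying assumptions"],
    "subgoals": [
        "Each component system is safe to operate",
        "The system of systems is safe in operation",
        "All SoS safety requirements are satisfied",
        "All interactions are safe",
    ],
}

# The essence of the argument runs down the middle (goal -> strategy ->
# subgoals); the context and assumptions sit alongside it.
print(len(argument["subgoals"]))  # 4
```

Laying the argument out like this makes the completeness claim visible: the four subgoals between them are meant to cover every element and every interaction.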

Example #4

So, I’m going to unpack each one very briefly, just to illustrate some points. First of all, each component system is safe to operate. Each of these systems, bar one, had already been purchased, sometimes a long time ago. They all came with their own safety targets, their own risk matrices, and so on. So, we had to make sure that when an individual system said, “This is what we’ve got to achieve”, that was good enough for the overall system of systems. We had to make sure that each system met its own safety requirements and targets, and that those were valid in context.

Now, you would think that double-checking existing systems would be a foregone conclusion. In reality, we discovered that the ship’s communication system and its combat data system were not as robust as assumed. Stakeholders reported some practical issues, and we also discovered some flaws in previous analysis that had been accepted a long time ago. In the end, those problems didn’t change the price of fish, as we say – they didn’t make a difference to the overall system of systems.

The frailty of the ship’s comms got sorted out, and we discovered that the combat system didn’t actually matter: we just assumed that the data coming out of it was garbage, and it made no difference. However, we did upset a few stakeholders along the way. So beware – people don’t like discovering that a system they thought was “tickety-boo” was not as good as they thought.

Example #5

The second goal was to show that the system of systems is safe in operation. So, we looked at the actual performance. We looked at test results of the radar and then also we were very fortunate that trials of the radar on the ship with aircraft were carried out and we were able to look at those trials reports. And once again, it emerged that the system in the real world wasn’t operating quite as intended, or quite as people had assumed that it would. It wasn’t performing as well. So, that was an issue. I can’t say any more about that but these things happen.

Also, a big part of the project was that we included the human element. As I’ve said before, we had pilots and we had radar talk-down operators. So, we brought in some human factors specialists. They captured the procedures and tasks that the pilots and the radar operators had to perform using what’s called a Hierarchical Task Analysis, analysed the tasks and what could go wrong, then created a model of what the humans were doing and ran it through a simulation several thousand times. In that way, they did some performance modelling.

Now, they couldn’t give us an absolute figure on workload or anything like that, but fortunately, our new system was replacing an older system which was even more informally cobbled together than the one we were bringing in. So, the human factors specialists were able to compare human performance in the old system with human performance in the new system. Very fortunately, the predicted performance was far better with the new system. It was much easier to operate for both the pilots and the talk-down radar operators. So, that was terrific.

Example #6

So, the third one: all system-of-systems safety requirements are satisfied. Now, this goal is a bit more nebulous, but what it really came down to was that when you put things together, very often you get what’s called emergent behaviour – things start to happen that you didn’t expect or predict based on the individual pieces. As the saying goes, two plus two equals five: you get more out of a system – synergy, for good or ill – when you start putting different things together.

So, does the whole thing actually work? Broadly speaking, the answer was yes, it works very well. There were some issues, though. A good example: the old radar that they used to talk the planes down was a search radar, so the operator could see other traffic apart from the plane they were guiding in. Now, the operator being able to see other things is both good and bad. On the one hand, it gives them improved situational awareness, so they can warn off traffic if a collision situation develops. On the other hand, it’s a distraction for the operator. So, it could have gone either way.

So, the new radar was specialized. It focused only on the aircraft being talked down, so the operator was blind to other traffic. That was great in terms of decreasing operator workload, and ultimately pilot workload as well. But would it increase the collision risk with other traffic? I'll come back to that briefly in the summary.

Example #7

And then our final goal was to show that all interactions between the guidance system, the aircraft, and the ship are safe. This was a non-trivial exercise, because ships have large numbers of electronic systems and there's a very involved process to go through to check that a new piece of kit doesn't interfere with anything else, or vice versa.

And also, of course, does the new electronic system, the new radar, affect the ship with its radiation? Because you've got weapons on the ship, and some of the explosive devices that the weapons use are electrically initiated. So, could the radiation set off an explosion? All of those things had to be checked, and that's a very specialized area.

And then we've got: does the system interfere with the aircraft, and the aircraft with the system? What about the integration of the aircraft with the ship, and the ship with the aircraft? Yet another specialized area where there's a particular way of doing things. And of course, the aircraft people want to protect the aircraft and the ship people want to protect the ship. So, getting those two to marry up is another one of those non-trivial exercises I keep referring to, but it all worked out in the end.

Summary

Points to note: when we're doing system of systems work – I've got five points here; you can probably work out some more from what I've said – we're putting together disparate systems. They're different systems, procured by different organizations, possibly to do different things. The stakeholders who bought them and care about them have different aims and objectives. They've got different agendas from each other. So, getting everyone to play nicely in the zoo can be challenging. And even with somebody pulling it all together at the top to say, "This has got to work. Get with the program, folks!", there's still some friction.

In particular, you end up with large numbers of stakeholders. For example, we would have regular safety meetings, but I don't think we ever had two meetings in a row with exactly the same attendees, because with a large group of people, somebody is always changing over or moving on. That can be a challenge in itself. We also need to include the human in the loop in systems of systems, because typically that's how we get them all to play together; we rely on human beings to do a lot of the translation work, in effect. So, how do the humans cope?

A classic mistake with systems design is to design a difficult-to-operate system and then just expect the operator to cope. That applies to things as seemingly trivial as amusement park rides. I did a lesson on learning from an amusement park ride accident only a month or two ago, and even there it was a very complex system for two operators, neither of whom had total authority over the system or, to be honest, really had the full picture of what was going on. As a result, there were several dead bodies. So, how do the operators cope, and have we done enough to support them? That's a big issue with a system of systems.

Thirdly, this is always true with safety analysis, but especially so with systems of systems: real-world performance matters. You can do all the analysis in the world making certain assumptions, and the analysis can look fine, but in the real world it's not so simple. We have to do analysis that assumes the kit works as advertised, because you've got nothing else to go on until you get the test results, and you don't get those until towards the end of the program. So, you're going down a path assuming that things do what they say on the tin, and perhaps you then discover they don't. Or they don't do everything they say on the tin. Or they do what they say, plus some other things you weren't expecting, and then you've got to deal with those issues.

And then fourthly, somewhat related to that: you put systems together, perhaps in an informal way, and then you discover how they actually get on – what really happens. In reality, once you get above a certain level of complexity, you're not going to discover all the emergent behaviours and consequences until the thing gets into service and has clocked up a bit of time under different conditions in the real world. In fact, that was the case here, and I think with a system of systems you've just got to assume that it's sufficiently complex that that will be the case.

Now, that's not an unsolvable problem but, of course, how do you contract for it? Your contractors want you to accept their kit and pay them at a certain date or a certain point in the program, but you're not going to find out whether it all truly works until it has been in service for a while. So, how do you incentivize the contractor to do a good job, or indeed to correct defects in a timely manner? That's quite a challenge for systems of systems, and it's something that needs thinking about upfront.

And then finally, I've said: remember the bigger picture. When you're doing analysis and you've made certain assumptions and set the scope, it's very easy to get fixated on that scope and those assumptions and forget that the real world is out there and is unpredictable. We had lots of examples of that on this program. The ship's comms didn't always work properly, we couldn't rely on the combat system, the radar in the real world didn't perform as well as the spec said, and so on. There were lots of these things.

And one example I mentioned: with the new radar, the radar operator does not see any traffic other than the aircraft being guided in. So, there's a loss of situational awareness there, and a risk, maybe an increased risk, of collision with other traffic. That actually led to a disagreement in our team, because some people had got quite fixated on the analysis and didn't like the suggestion that maybe they'd missed something. Although it was never put in those terms, that's the way they took it. So, we need to be careful of egos. We might think we've done a fantastic analysis and produced hundreds of pages of data and fault trees, or whatever it might be, but that doesn't mean our analysis has captured everything, or that it has completely captured what goes on in the real world, because that's very difficult to do with such a complex system of systems.

So, we need to be aware of the bigger picture, even if only qualitatively. Somebody, a little voice, piping up somewhere saying, "What about this? Have we thought about that? I know we're ignoring this because we've been told to, but is that the right thing to do?" Sometimes it's good to be reminded of those things, and we need to remember the big picture.

Copyright Statement

Anyway, I’ve talked for long enough. It just remains for me to point out that all the text in quotations, in italics, is from the military standard, which is copyright free but this presentation is copyright of the Safety Artisan. As I’m recording this, it’s the 5th of September 2020.

For More …

And so, if you want more, please do subscribe to the Safety Artisan channel on YouTube. You can see the link there, or just search for Safety Artisan on YouTube and you'll find us. Subscribe there to get free video lessons and free previews of paid content. And for all lessons, both paid and free, and other resources on safety topics, please visit the Safety Artisan at www.safetyartisan.com/ where I hope you'll find much more that is helpful and enjoyable.

End: System of Systems Hazard Analysis

So, that is the end of the presentation and it just remains for me to say thanks very much for watching and listening. It’s been good to spend some time with you and I look forward to talking to you next time about environmental analysis, which is Task 210 in the military standard. That’ll be next month, but until then, goodbye.


Health Hazard Analysis

In this full-length (55-minute) session, The Safety Artisan looks at Health Hazard Analysis, or HHA, which is Task 207 in Mil-Std-882E. We explore the aim, description, and contracting requirements of this complex Task, which covers: physical, chemical & biological hazards; Hazardous Materials (HAZMAT); ergonomics, aka Human Factors; the Operational Environment; and non/ionizing radiation. We outline how to implement Task 207 in compliance with Australian WHS. (We refer to other lessons for specific tools and techniques, such as Human Factors analysis methods.)

This is the seven-minute-long demo. The full version is a 55-minute-long whopper!

Health Hazard Analysis: Topics

  • Task 207 Purpose;
  • Task Description;
  • ‘A Health Hazard is…’;
  • ‘HHA Shall provide Information…’;
  • HAZMAT;
  • Ergonomics;
  • Operating Environment;
  • Radiation; and
  • Commentary.

Health Hazard Analysis: Transcript


Introduction

Hello, everyone, and welcome to the Safety Artisan. I’m Simon, your host, and today we are going to be talking about health hazard analysis.

Task 207: Health Hazard Analysis

This is Task 207 in the Mil-Std-882E approach, which is targeted at defense systems, but you will see it used elsewhere. The principles that we're going to talk about today are widely applicable, so you could use this standard for other things if you wish.

Topics for this Session

We've got a big session today, so I'm going to plough straight on. We're going to cover the purpose of the task and the task description; the task helpfully defines what a health hazard is and says what information health hazard analysis, or HHA, shall provide. We'll talk about three specialist subjects: hazardous materials, or HAZMAT; ergonomics; and the operating environment. Radiation, another specialist area, is also covered. Then we'll finish with some commentary from me.

Now, the requirements of this task are so extensive that, for the first time, I won't be quoting all of them word for word. I've actually had to chop out some material, but I'll explain that when we come to it. We can work with that, but it is quite a demanding task, as we'll see.

Task Purpose

Let's look at the task purpose. We are to perform and document a health hazard analysis: to identify human health hazards; to evaluate, as it says, materials and processes using those materials that might cause harm to people; and to propose measures to eliminate the hazards or reduce the associated risks. In many respects, it's a standard 882-type approach. We're going to do all the usual things. However, as we shall see, we're going to do quite a lot more on this one.

Task Description #1

So, the task description. We need to evaluate the potential effects resulting from exposure to hazards, and this is something I will come back to again and again. It's very easy, dealing in this area, particularly with hazardous materials, to get hung up on every tiny amount of potentially hazardous material that is in the system or in a particular environment, and I've seen this done to death so many times. I saw it overdone in the UK when COSHH, the Control of Substances Hazardous to Health regulations, came into the military. We went bonkers about it. We did risk assessments up the ying-yang for stuff that we just did not need to worry about; stuff that was in every office up and down the land. So, we need to be sensible about doing this, and I'll keep coming back to that.

So, we need to do as it says: identification, assessment, characterisation, control, and communication of hazards in the workplace environment. And we need to follow a systems approach, considering, "What's the total impact of all these potential stressors on the human operator or maintainer?" Again, I come from a maintenance background. The operator often gets lots of attention because, if the operator stuffs up, you very often end up with a very nasty accident where lots of people get hurt. So, that's a legitimate focus for a human operator of a system.

But also, in a lot of organizations, the executive management tend to be operators, because that's how the organization evolves. So, sometimes you can have an emphasis on operations, while maintenance, support, and other things get ignored because they're not sexy enough for senior management. That's a bad reason for not looking at stuff. We need to think about the big picture, not just the people who are in control.

Task Description #2

Moving on with the task description. We need to do all of this good stuff, thinking about materials, components, and so forth, and whether they cause or contribute to adverse effects in organisms or their offspring (we're talking about genetic effects as well), or pose a substantial present or future danger to the environment. So, in 882, we are talking about environmental impact as well as human health impact. There is an environmental task as well that covers this explicitly.

Personally, I would tend to keep the human impact and the environmental impact separate, because very often different laws apply to the two. If you try to mix them together, or do a one-size-fits-all analysis, you'll frequently make life more difficult for yourself than you need to. So, I would tend to keep them separate. However, that's not quite how the standard is written.

A Health Hazard is …

So, what is a health hazard? As it says, a health hazard is a condition, and it's got to be inherent to the operation of the system, through to its disposal. So, it's cradle to grave. That's important, and it's consistent with a lot of Western law. It's got to be capable of causing death, injury, illness, disability, or, in this standard, even just reduced job performance of personnel, by exposure to physiological stresses.

Now, I'm getting ahead of myself because, in Australia, health hazards can include psychological impacts as well, not just impacts on physical health. And reduced job performance? Are we really interested in minor stuff? Maybe not. Maybe we need to define what we mean by that, particularly when it comes to operators or maintainers making mistakes, perhaps through fatigue, which can have very serious consequences.

So, this analysis task is going to address lots of the causes or factors that we typically find in big accidents and relate them to effects on human performance. Then it goes on to specify that certain specific hazards must be included: chemical, physical, biological, and ergonomic. For ergonomic, I would say human factors, because what the standard calls ergonomics is much wider than the narrow definition of ergonomics that I'm used to.

Now, this is the first place where I've chopped some material, because in a) to d), where it says 'e.g.', those examples are in effect a checklist of chemical, physical, biological, and ergonomic hazards that you need to look at. This task has its own checklist. You might recall, when we talked about preliminary hazard identification, that a hazard checklist is a very good method for getting broad coverage in general. Now, in this task, we have further checklists that are specific to human health. That's something to note.
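The checklist method is easy to mechanise. Here is a minimal sketch of checklist-driven hazard identification; the categories match those named above, but the individual prompts are invented for illustration and are not quoted from Task 207.

```python
# Hedged sketch: a hazard checklist forces the review team to consider
# every category, which is how checklists deliver broad coverage.
# The prompts below are illustrative, not the standard's own lists.
HEALTH_HAZARD_CHECKLIST = {
    "chemical":   ["fuels and lubricants", "cleaning solvents", "exhaust gases"],
    "physical":   ["noise", "vibration", "extreme temperatures"],
    "biological": ["mould", "contaminated water"],
    "ergonomic":  ["manual handling loads", "workstation posture", "fatigue"],
}

def review_system(applicable):
    """Walk the checklist and record, per category, which prompts the
    review team judged applicable to this system. Every category is
    visited, even if nothing applies, so gaps are explicit."""
    findings = {}
    for category, prompts in HEALTH_HAZARD_CHECKLIST.items():
        findings[category] = [p for p in prompts if p in applicable]
    return findings

findings = review_system({"noise", "fatigue", "cleaning solvents"})
print(findings)
```

Note that an empty list against a category is itself useful evidence: it records that the category was considered and judged not applicable, rather than simply overlooked.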

We've also got to think about hazardous materials that may be formed by test, operation, maintenance, disposal, or recycling. That's very important; we'll come back to it later. We need to think about crashworthiness and survivability issues. And we've got to think about what it calls non-ionizing radiation hazards, but in reality we've got to consider ionizing radiation as well, if we have any radioactive elements in our system, and it does say that in g). So, we've got to do both non-ionizing and ionizing.

HHA Shall Provide Info #1

What categories of information should this health hazard analysis generate? Well, first of all, it's got to identify hazards and, as I've said or hinted at before, we've got to think about how human beings could be exposed. What is the pathway, or the conditions, or the mode of operations by which a hazardous agent could come into contact with a person? (I will focus on people.) Just because a potentially hazardous chemical is present doesn't mean that someone's going to get hurt. I suspect that if I looked around at the computer I'm recording this on, or at the objects on my desk, there are lots of materials that would probably not do me a lot of good if I were to eat or swallow them or ingest them in some other way. But it's highly unlikely that I'm going to start eating them, so maybe we don't need to worry about that.
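That pathway reasoning can be captured in a few lines. In this sketch (the agents and pathways are invented), an agent only goes forward for assessment if there is at least one credible exposure pathway to a person:

```python
# Hedged sketch: a hazardous agent only becomes a health hazard when a
# credible exposure pathway to a person exists. Agents and pathways
# below are invented for illustration.
agents = [
    {"agent": "lead solder in sealed module", "pathways": []},
    {"agent": "solvent vapour in paint shop", "pathways": ["inhalation"]},
    {"agent": "used engine oil", "pathways": ["skin contact", "ingestion"]},
]

# Only agents with at least one credible pathway go forward for full
# assessment; the rest are recorded and screened out, with reasons.
for_assessment = [a["agent"] for a in agents if a["pathways"]]
print(for_assessment)
```

Screening like this is how you avoid the COSHH-style trap described earlier: the sealed-away material is documented and dismissed with a reason, rather than dragged through a full risk assessment.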

HHA Shall Provide Info #2

We also need to think about the characterization of the exposure: describing the assessment process, naming the tools or any models used, and showing how we estimated the intensities of energy or the concentrations of substances, and so on. This is one of those analyses that is particularly sensitive to the way we go about doing things. Indeed, in lots of jurisdictions, you will be directed as to how you should do some of these analyses, and we'll talk about that in the commentary later. So, we've got to include all of that. We've got to 'show our working', as our teachers used to tell us when preparing us for exams.

HHA Shall Provide Info #3

We've got to think about severity and probability. Here the task directs us to use the standard definition tables found in 882. I talked about those under Task 202, so I'm not going to discuss them further here. Now, of course, we can, and maybe should, tailor these matrices. Again, I've talked about that elsewhere, but if we're not using the standard matrices and tables, then we should set out what we've done and why that's appropriate.
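The severity/probability lookup is just a table. Here is a sketch in the style of the Mil-Std-882E risk matrix; the cell assignments below are illustrative only, so consult Table III of the standard (or your tailored matrix) for the authoritative values.

```python
# Hedged sketch of a severity/probability lookup in the style of the
# Mil-Std-882E risk matrix. The cell values are illustrative; use the
# standard's Table III, or your documented tailored matrix, in practice.
SEVERITY = {"catastrophic": 1, "critical": 2, "marginal": 3, "negligible": 4}
PROBABILITY = {"frequent": "A", "probable": "B", "occasional": "C",
               "remote": "D", "improbable": "E"}

RISK_MATRIX = {  # (severity, probability) -> risk level (illustrative)
    (1, "A"): "High", (1, "B"): "High", (1, "C"): "High",
    (1, "D"): "Serious", (1, "E"): "Medium",
    (2, "A"): "High", (2, "B"): "High", (2, "C"): "Serious",
    (2, "D"): "Medium", (2, "E"): "Medium",
    (3, "A"): "Serious", (3, "B"): "Serious", (3, "C"): "Medium",
    (3, "D"): "Medium", (3, "E"): "Medium",
    (4, "A"): "Medium", (4, "B"): "Medium", (4, "C"): "Medium",
    (4, "D"): "Low", (4, "E"): "Low",
}

def assess(severity, probability):
    """Map a (severity, probability) pair to a risk level."""
    return RISK_MATRIX[(SEVERITY[severity], PROBABILITY[probability])]

print(assess("critical", "occasional"))  # -> Serious (in this sketch)
```

If you do tailor the matrix, the point made above applies: record the tailored cell values and the justification alongside the analysis, because the lookup is only as defensible as the table behind it.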

HHA Shall Provide Info #4

Then, finally, the mitigation strategy. We shouldn't be doing analysis for the sake of analysis. We should be doing it to ask, "How can we make things better?" And, in particular for health, "How can we make things acceptable?" Because health hazards very often attract absolute limits on exposure, questions of SFARP or ALARP or cost-benefit analysis simply may not enter into the equation. We may simply be directed: "This is the upper limit of what you can expose a human being to. This is not negotiable." So, that's another important difference with this task.
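The logic of an absolute limit is worth making explicit: above the limit there is no trade-off to argue. A minimal sketch, assuming the common Australian WHS 8-hour occupational noise exposure standard of 85 dB(A); substitute whatever limit your jurisdiction actually mandates.

```python
# Hedged sketch: an absolute exposure limit is a hard constraint, not a
# cost-benefit trade-off. The 85 dB(A) figure is the usual 8-hour
# occupational noise exposure standard under Australian WHS regulations;
# check the limit that actually applies in your jurisdiction.
NOISE_LIMIT_DBA_8H = 85.0

def exposure_acceptable(laeq_8h_dba):
    """Above the absolute limit, the design is simply non-compliant.
    No SFARP/ALARP or cost-benefit argument can make it acceptable."""
    return laeq_8h_dba <= NOISE_LIMIT_DBA_8H

print(exposure_acceptable(83.0))  # True  -> still reduce SFARP below the limit
print(exposure_acceptable(88.0))  # False -> must be fixed, not argued away
```

Note that passing the absolute check does not end the matter: below the limit, the usual SFARP/ALARP obligation to reduce risk still applies.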

Three More Topics

Now, at this point, I am just foreshadowing: we're about to move on to some different topics. First, we're going to talk about three particular subjects: hazardous materials, or HAZMAT for short; ergonomics; and the operational environment. When we say the operational environment, it's mainly about the people aspects of the system and the environment that those people experience. Then, after these three, we'll go on to talk about radiation. There are special requirements in all three areas: HAZMAT, ergonomics, and the operational environment.

HAZMAT (T207) #1

First of all, we have to deal with HAZMAT. If it’s going to appear in our system, or in the support system, we’ve got to identify the HAZMAT and characterize it. There are lots of international and national standards about how this is to be done. There’s a UN convention on hazardous materials, which most countries follow. And then there will usually be national standards as well that direct what we shall do. More on that later. So, we’ve got to think about the HAZMAT.

A word of caution on that. Certainly in Australian defence, we do HAZMAT to death because of a recent historical example, a big national scandal about people being exposed to hazardous materials while doing defence work. So, the Australian Defence Department is ultrasensitive about HAZMAT and will almost certainly mandate very onerous requirements for performing this task. And whilst we might look at that and go, "This is nuts! This is totally over the top!", unfortunately we just have to get on with it, because no one, I'm afraid, is going to make a sensible decision about the level of risk that we don't have to worry about; it's just too sensitive a topic.

So, this is one of those areas where learning from experience has actually gone a bit wrong, and we now find ourselves doing far too much work looking at tiny risks, possibly at the expense of looking at the big picture. That's just something to bear in mind.

HAZMAT (T207) #2

So, lots of requirements for HAZMAT. In particular, we need to think about what we are going to do with it when it comes to disposal: disposal of consumables, of worn components, or final disposal of the system. And very often the hazardous material will have become more hazardous. For instance, engine or lubricating oil will probably have metal fragments in it once it's been used, and other chemical contamination, which may render it carcinogenic. So, very often we start with a material that is relatively harmless, but use, particularly over a long period of time, can alter those chemicals or introduce contaminants and make them more dangerous. So, we need to think about the full life of the system.

Ergonomics (T207) #1

Moving on to ergonomics, and this is another big topic. Now, Mil-Std-882 doesn't address human factors particularly well, in my view. The human factors material gets buried in various tasks, and we don't get a separate human factors program with all of the interconnections that you need to make it fully effective. But this is one task where human factors do come in, very much so, although they are called ergonomics rather than human factors. Under this task description, we need to think about mission scenarios, and about the staff who will be exposed as operators or maintainers, whatever they might be doing. We've got to start to characterize the population at risk.

Ergonomics (T207) #2

We've got to think about the physical properties of things that personnel will handle or wear, and the implications that has for the body. For example, there is a saying that "the Air Force and the Navy man their equipment and the Army equip their men". Apologies for the gendered language, but that's the saying. So, we're very often putting human beings inside ships and planes and tanks and trucks. And we're also asking soldiers to carry lots of heavy equipment: their rations, their weapons, their ammunition, water, and the various tools and stuff that they need to survive and fight on the battlefield. All that stuff weighs something and, if you're running about carrying it, it bangs into the body and can hurt people. So, we need to address that.

Secondly, we need to look at the physical and cognitive actions that operators will take. This gets very broad once we get into the cognitive arena, thinking about what the operators are going to be doing. And thirdly, exposure to mechanical stress while performing work; so, maybe more of a focus on the maintainer in that part. Now, for all of this, we need to identify characteristics of the design of the system, or the design of the work, that could degrade performance or increase the likelihood of erroneous action that could result in mishaps or accidents.

This is classic human factors stuff: how might the designed work or the designed equipment induce human error? That's a huge area of study for a lot of systems, and very important. It will typically be a very large contributor to serious accidents and, in fact, to accidents of all kinds. So, it should be an area of great focus. Often it is not; we tend to focus on the so-called technical risks and overdo those while ignoring the human in the system. Or we just assume that the human will cope, which is worse.

Ergonomics (T207) #3

Continuing with ergonomics: how many staff do we need to operate and maintain the system, and what demands are we placing on them? Also, if we overdo those demands, what are we going to do about it? Now, this can be a big problem in certain systems. I come from an aviation background, where fatigue and crew duty time tend to be very heavily policed. But I was actually quite shocked when I began looking at naval surface ships and submarines, where it seemed that fatigue and crew duty time were not well policed. In fact, in some places there even seemed to be quite a macho attitude to forcing the crew into working long hours. I say macho attitude because the feeling seemed to be, "Well, if you can't take it, you shouldn't have joined."

So, it seems to me there is potentially quite a negative culture in those areas, and it's something that we need to think about. In particular, I've noticed on certain projects that you can have a large crew doing an extraordinary amount of work and becoming very fatigued. That's concerning because, of course, you could end up with a level of fatigue where the crew are making mistakes at the same rate as a drunk driver. So, this is something that needs to be considered carefully and given the attention it deserves.

Operating Environment #1

Moving on to the operating environment. How will these systems be used and maintained, and what does that imply for human exposure? This is another opportunity to learn from legacy systems: to go back, look at historical material, and ask, "What have people been exposed to in the past, and what could happen again?"

Now, that's important, and it's often not done very systematically. We might go and talk to a few old, bold operators and maintainers and ask their advice on the things that can go wrong, but we don't always do it systematically. We don't always survey past hazard and accident data in order to learn from it. Or, if we do, there is sometimes a tendency to say, "That happened in the past, but we will never make those mistakes. We're far too clever to stuff up like our predecessors did." Forgetting that our predecessors were just as clever as we are, and just as well-meaning, but they were human, and so are we.

I think pride can get in the way of a lot of these analyses as well. And there may be occasions where we're getting close to exposure limits, where regulations say we simply cannot expose people to a certain level of noise, or whatever it might be. Then how are we going to deal with that? How are we going to prevent people from being overexposed? Again, this can be a problem area.

Operating Environment #2

This next bit of the operating environment is really the "putting people in the equipment" piece I mentioned; this is parts a) and b). We're thinking, "If we put people in a vehicle, whether it be a land vehicle, a marine vehicle, an air vehicle, whatever it might be, what is that vehicle going to do to their bodies?" In terms of noise, vibration, stresses like G forces, and shock loading? Could we expose them to blast overpressure, or some other sudden change of pressure or noise, that is going to damage their ears, temporarily or permanently? That is remarkably easy to do. So, that's that aspect.

Operating Environment #3

Moving on, we continue with noise and vibration in general. In this particular standard, we've got some quite stringent guidance on what needs to be looked at. Now, these requirements are, of course, assuming a particular way of doing things, which we will come to later. There are a lot of standards referenced by Task 207. This task assumes that we're going to do things the American government or American military way, which may not be appropriate for what we're doing or the jurisdiction we're in. So, we'll just move on.

Operating Environment #4

Then, again talking about noise, blast, and vibration: how are we going to do it? There are some quite specific requirements in here. And again, you'll notice, two-thirds of the way down the paragraph, I've had to chop out some examples. There are, in effect, more hazard checklists in here, saying we must consider X, Y, and Z. Again, this seems to require a particular way of doing things that may not be appropriate in a non-American defence environment.

However, the principle to take away is that this is a very demanding task. If we consider human health effects properly, it's going to require a lot of work by some very specialist and skilled people. In fact, we may even bring in specialist medical people. If you work in aviation or medicine, you may be aware that there is a specialist branch of medicine called aviation medicine, where these things are specifically considered. Similarly, there are medical specialists for diving operations and other activities where we expose human beings to strange effects. So, this can be a very, very demanding task to follow.

Operating Environment #5

So, when we're going to equip people with protective equipment, or make engineering changes to the system to protect them, how effective are these measures going to be? Given that most of them have finite effectiveness (they're rarely perfect unless you can take the human out of the system entirely), we're still going to be exposing people to some level of hazard, and there will be some risk of injury.

So, how many individuals are we going to expose per platform, or across the total population exposed over the life of the system? Bear in mind that we're sometimes talking about very large military systems that are in service for decades; this can be thousands and thousands of people. So, we may need to think about that. And certainly in Australia, if we expose people to certain potential contaminants and noise, we may have to run a monitoring program to monitor the health and exposure of some or all of the exposed population. That can be a major task, and we would need to identify the requirements to do it quite early on, hopefully.

And then, of course, we're not doing this for the sake of it. How can we optimize the design to effectively reduce noise and vibration exposure for humans? And how did we calculate it? How did we come to those conclusions? We're going to have to keep those records for a long, long time. So, again, there are very demanding recording requirements for this task.
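To show the kind of calculation that has to be recorded, here is the standard energy-average formula for combining per-task noise exposures into an 8-hour equivalent level, LAeq,8h. The task durations and levels are invented for illustration.

```python
# Hedged sketch: combining per-task noise exposures into an 8-hour
# equivalent continuous level, the standard energy-average calculation:
#   LAeq,8h = 10*log10( (1/8) * sum( t_i * 10^(L_i/10) ) )
# Task levels and durations below are invented for illustration.
import math

def laeq_8h(exposures):
    """exposures: list of (level_dBA, hours) pairs.
    Energy-average over an 8-hour reference period."""
    energy = sum(hours * 10 ** (level / 10.0) for level, hours in exposures)
    return 10.0 * math.log10(energy / 8.0)

# e.g. 2 h at 91 dB(A) on the flight deck, 6 h at 75 dB(A) in the ops room
level = laeq_8h([(91.0, 2.0), (75.0, 6.0)])
print(f"LAeq,8h = {level:.1f} dB(A)")  # -> LAeq,8h = 85.3 dB(A)
```

Because the averaging is done on energy, not decibels, a short loud task dominates: in the example, two hours at 91 dB(A) pushes the whole day's exposure to about 85.3 dB(A) despite six quieter hours. Records of exactly this working are what the task's recording requirements are asking you to keep.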

Operating Environment #6

And then, I think this is the final one on the operating environment: what are the limitations of this protective equipment, and what burden does it impose? Because, of course, if we load people up with protective equipment, that may introduce further hazards. Maybe we're making the individual more likely to suffer a musculoskeletal disorder.

Or maybe we are making them less agile, or reducing their sensitivity to noise. If we give people hearing protection, and somebody else has assumed that they will hear a hazard coming, well, they're not going to anymore, are they? If they're wearing lots of protective equipment, they may not be as aware of the environment around them as they once were. So, we can introduce secondary hazards with some of this stuff. And then we need to look at the trade-offs: when and where is it better to equip people, or not to equip them and limit their exposure, or just keep them away altogether?

Radiation (T207)

So, moving on briefly, we're just going to talk about radiation. Now, in this task (again, I've had to chop a lot of material out) you'll see that, in square brackets, the task refers to certain US standards for radiation, both ionizing and non-ionizing, lasers and so forth. That's appropriate for the original domain at which this standard was targeted. It may be wholly inappropriate for what you and I are doing.

So, we need to look at the principles of this task, but we may need to tailor the task substantially in order to make it appropriate for the jurisdiction we’re working in. Again, we’re going to have to keep these records for a long time. Radiation is always going to be dreaded by humans so it’s a controversial topic. We’re going to have to monitor people’s exposure and protect them and show that we have done so, potentially decades into the future. So, we should be looking for the very highest standards of documentation and recording in these areas because they will come under scrutiny.

Contracting #1

Moving onto contracting, this is more of a standard part of this task – or part of the standard, I should say. These words, or very similar words, exist in every task. So, I'm not going to go through all of these things in any great detail. It's worth noting, and I'll come back to this in part B, that we may need to direct whoever is doing the analyses to consider or exclude certain areas, because it's quite possible to fritter away a lot of resources doing a wide but shallow analysis that fails to get to the things that can really hurt people.

So, we might be doing a superficial analysis or we might go overboard on a particular area and I’ve mentioned HAZMAT but there are many things that people can get overexcited about. So, we might see people spending a lot of time and effort and money in a particular area and ignoring others that can still hurt people. Even though they might be mundane, not as sexy. Maybe the analysts don’t understand them or don’t want to know. So, the customer who is paying for this may need to direct the analysis. I will come on to how you do that later.

Also, the customer or client may need to specify certain sources of information: certain standards, certain exposure standards, certain assumptions, certain historical sets of data and statistics to be used. Or some statistics about the population because, for example, the people who operate military systems tend to be quite a narrow subset of the population. There are very often age limits; frontline infantry soldiers tend to be young and fit. In certain professions, you may not be allowed to work if you are colour-blind or have certain disabilities. So, it may be that a broad analysis of the general population is not appropriate for certain tasks.

It may be perfectly reasonable to assume certain things about the target population. So, we need to think about all of these things and ensure that we don’t have an unfocused analysis that as a result is ineffective or wastes a lot of money looking at things that don’t really matter, that are irrelevant.

Contracting #2

Standards and criteria. In part F, the standard lists 29 references, which are all US military or US legal standards. Probably a lot of those will be inappropriate for a lot of jurisdictions and a lot of applications. So, there's going to be quite a lot of work there to identify the appropriate and mandatory references and standards to use. And as I said, in the health hazard area, there are often a lot. So, we will often be quite tightly constrained on what to do.

And Part H: if the customer knows, or has some idea of, the numbers and profile of the staff who are going to be exposed to this system by operating and maintaining it, that's very useful information and it needs to be shared. We don't want to make the analyst, the contractor, guess. We want them to use appropriate information. So, do your homework and make sure you tell them the right things.

Commentary #1

So, that's all of the standard. I've got four slides now of commentary. In the first one, I just want to summarize what we've talked about and think about the complexity of what we're being asked to do. First bullet point: we are considering cradle-to-grave operation, maintenance, and disposal – everything associated with, potentially, quite a complex system. Now, this lines up very nicely with the requirements of Australian law, which require us to do all of this stuff. So, it's got to be comprehensive.

Second bullet point, we've got to think about a lot of things: death and injury, illness, disability – and could we infect or contaminate somebody with something that will cause birth defects in their offspring? There's a wide range of potential vectors of harm that we're talking about here, and for some systems we will need to bring in some very specialist knowledge in order to do this effectively. We're also thinking about reduced job performance – this is one aspect of human factors. This task is going to link very strongly to whatever human factors program we might have.

Thirdly, we've got to think about chemical, physical, and biological hazards. So, again, there's a wide range of stuff to think about there. An example of that is hazmat, and the requirements on hazmat in most jurisdictions tend to be very stringent. So, we need to be prepared to do a thorough job, demonstrate that we've done a thorough job, and provide all the evidence.

Then we've also got ergonomics. Actually, strictly speaking, we're talking about human factors here, because it's a much wider definition than the definition of ergonomics that I'm used to, which tends to cover purely physical effects on a human. We're talking about cognition, perception, and job performance as well, and we've also got vibration and acoustics. So, again, particular medical effects and stringent requirements – a whole heap of other specialist work there.

And the operating environment, thinking about the humans that will be exposed. How are we going to manage that? What do we need to specify in order to set up whatever medical monitoring program of the workforce we might have to run through life? So, again, it's potentially a very big, expensive program, and we need to plan it properly.

Then finally, radiation. Another controversial topic which gets lots of attention. Very stringent requirements, both in terms of exposure levels and indeed we will often be directed as to how we are to calculate and estimate stuff. It’s another specialist area and it has to be done properly and thoroughly.

Overall, every one of those seven bullet points shows how complex and how comprehensive a good health hazard analysis needs to be. So, to specify this well, to understand what is required and what is needed through life, for the program to meet our legal and regulatory obligations, this is a big task and it needs a lot of attention and potentially a lot of different specialist knowledge to make it work. I flogged that one to death, so I’ll move on.

Commentary #2

Now, as I've said before, this is an American military standard, so it's been written to conform to that world. In Australia, the requirements of Australian Work Health and Safety (WHS) law are quite different to the American way of doing things. Whilst we tend to buy a lot of American equipment, and there's a lot of American-style thinking in our military and in our defence industry, Australian law is actually much more closely linked to English law. It's a different legal basis to what the Americans use. So, Australian practitioners take note.

It's very easy to go down the path of following this standard and doing something that will not really meet Australian requirements. We'll do some work, and it may be very good work, but when we come to the end and have to demonstrate compliance with Australian requirements, if we haven't thought about that explicitly upfront, we're probably in for a nasty shock and a lot of expensive rework that will delay the program. And that means we're going to become very unpopular very quickly. So, that's one to avoid, in my experience.

So, we will need to tailor Task 207 requirements upfront in order to achieve WHS compliance. The contractor and the analysts need to understand that, but the client or customer needs to understand it first, otherwise it won't happen.

Commentary #3

Let’s talk a bit more about tailoring for WHS. For example, there are several WHS codes of practice which are relevant. And just to let you know, these codes of practice cover not only requirements of what you have to achieve, but also, to a degree, how you are to achieve them. So, they mandate certain approaches. They mandate certain exposure standards. Some of them also list a lot of other standards that are not mandated but are useful and informative.

So, we've got a code of practice on hazardous manual tasks – avoiding musculoskeletal injuries. We've got several codes of practice on hazardous chemicals: a COP specifically on risk management and risk assessment of hazardous chemicals, one on safety data sheets, and one on labelling of HAZCHEM in the workplace. We've got a COP on noise and hearing loss, and we also have other COPs on specific risks, such as asbestos, electricity and others, depending on what you're doing. So, potentially there is a lot of regulation and there are a lot of codes of practice that we need to follow.

And remember that while COPs contain regulatory requirements, they are also a standard that a court will look to enforce if you get prosecuted. If you wind up in court, the prosecution will be asking questions to determine whether you've met the requirements of the COP or not. You might have done a whole heap of work, and you might be the greatest expert in the world on a certain kind of risk, but if you can't demonstrate that you've met at minimum the requirements of the COP – because they are minimum requirements – then you're going to be in trouble. So, you need to be aware of what those things are.

Then on radiation, we have separate laws outside WHS. We have the Australian Radiation Protection and Nuclear Safety Agency, ARPANSA, and there is an associated act, associated regulations, and some COPs as well. So, on the radiation side, there's a whole other world that you've got to be aware of, and associated with all of this stuff are exposure standards.

Commentary #4

Finally, how do we do all of this without spending every dollar in the defence budget and taking 100 years to do it? Well, first of all, we need to set our scope and priorities. So, before we get to Task 207, the client or customer should be involving end-users and doing a preliminary hazard identification exercise (Task 201). That should be as broad and as thorough as possible. They should also be doing a preliminary hazard analysis exercise, Task 202, to think about those hazards and risks further.

Also, you should be doing Task 203, which is system requirements hazard analysis. We need to be thinking about the applicable requirements for the system, from the law all the way down to specific standards. What codes of practice? What historical norms do we expect for this type of equipment? Maybe there is industry good practice on the way things are done. Maybe, as we work through the specifications for the equipment, we will derive further requirements for hazard controls, or a safety management system, or whatever it might be. That's a big job in itself.

So, we need to do all three of those tasks, 201, 202, 203, in order to be prepared and ready to focus on those things that we think might hurt us. Might hurt people physically, but also might hurt us in terms of the amount of effort we’re going to have to make in order to demonstrate compliance and assurance. So, that will focus our efforts.

Secondly, we need to think about when to do the specialist analyses – and we may not always need to do so; this is where Tasks 201, 202, and 203 come in. But where we do need specialist analyses, we may need to find specialist staff who are competent to do this kind of unusual or specialist work and do it well. Now, typically, these people are not cheap, and they tend to be in short supply. So, if you can think about this early and engage people early, then you're going to get better support.

You're probably going to get a better deal, because in my experience, if you call in the experts and ask their opinion early on, they're more likely to come back and help you later. Whereas if you ignore them or disregard their advice, and then ask them for help because you're in trouble, they may just turn you away because they've got so much work on. They don't need your work. They don't need you as a client. You may find yourself high and dry without the specialists you need, or you may find yourself paying through the nose to get them because you're not a priority in their eyes. So do think about this stuff early, I would suggest, and do cultivate the specialists. If you get them in early and listen to them and they feel involved, you're much more likely to get a good service out of them.

So thirdly, try not to do huge amounts of work on stuff that doesn't really have a credible impact on health. Now, I know that sounds like a statement of the blinking obvious, but people get very het up about health issues, particularly things like radiation and other hazards that humans can't see: we dread them. We get very emotional about this stuff, and therefore management tends to get very, very worried about it. I've seen lots of programs spend literally millions of dollars analyzing stuff to death, which really doesn't make any difference to the safety of people in the real world. Obviously, that's wasted money, but it also diverts attention from those areas that really could cause harm to people through the life of the system.

So, we need to use that risk matrix to understand what is the real level of risk exposure to human beings and therefore, how much money should we be spending? How much effort and priority should we be spending on analyzing this stuff? If the risk is genuinely very low, then probably we just take some standard precautions, follow industry best practice, and leave it at that and we keep our pennies for where they can really make a difference.
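The idea of letting the matrix drive the level of effort can be sketched in a few lines. The matrix below is based on Table III of Mil-Std-882E (severity categories 1-4, probability levels A-E, risk levels High/Serious/Medium/Low); the mapping from risk level to analysis effort is a hypothetical policy of my own, purely for illustration.

```python
# Risk assessment matrix based on Table III of Mil-Std-882E.
# Severity: 1=Catastrophic, 2=Critical, 3=Marginal, 4=Negligible.
# Probability: A=Frequent, B=Probable, C=Occasional, D=Remote, E=Improbable.
MATRIX = {
    1: {"A": "High", "B": "High", "C": "High", "D": "Serious", "E": "Medium"},
    2: {"A": "High", "B": "High", "C": "Serious", "D": "Medium", "E": "Medium"},
    3: {"A": "Serious", "B": "Serious", "C": "Medium", "D": "Medium", "E": "Low"},
    4: {"A": "Medium", "B": "Medium", "C": "Medium", "D": "Low", "E": "Low"},
}

def risk_level(severity, probability):
    """Look up the risk level for a severity/probability pair."""
    return MATRIX[severity][probability.upper()]

def analysis_priority(severity, probability):
    """Hypothetical policy: match analysis effort to assessed risk."""
    effort = {
        "High": "detailed specialist analysis",
        "Serious": "detailed specialist analysis",
        "Medium": "standard analysis and controls",
        "Low": "standard precautions / industry good practice",
    }
    return effort[risk_level(severity, probability)]

print(risk_level(1, "D"))         # Serious
print(analysis_priority(4, "E"))  # standard precautions / industry good practice
```

The point of the sketch is the last function: a Negligible/Improbable (4E) risk earns standard precautions only, so the analysis budget stays with the cells that can genuinely hurt people.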

Now, having said that, there are some exceptions. We do need to think about accident survivability. What stresses are people going to be exposed to if their vehicle is in an accident? How do we protect them? How do they escape afterward, hopefully? How do we get them to safety and treat the injured? And so on. That may be a very significant thing for your system.

Also, post-accident scenarios. Very often a lot of hazardous materials are safely locked away inside components and systems, but if the system catches fire, or is smashed to pieces and then catches fire, then potentially a lot of that HAZMAT is going to become exposed. Materials that pose a very low level of risk in normal use can become far more serious once they've burned – look at the toxic residue left behind after a fire. So, that is something to consider: what do we do after we've had an accident and we need to clean up the site afterward? And so on.

Again, this tends to be a very specialist job, so maybe we need to get in some specialists to give us advice on that. Or we need to look to some standards if it's a commonplace thing in our industry, as it often is. We learn from bitter experience. Well, hopefully, we learn from bitter experience.

Copyright Statement

So, that's it from me. I appreciate it's been a long session, but this is a very complex task, and I've really only skimmed the surface and pointed you at further reading and maybe some principles to look at in more depth. All the quotations are from the Mil. Standard, which is copyright free, but this presentation is copyright of the Safety Artisan.

For More…

And for more information on this topic and others, and for more resources, do please visit www.safetyartisan.com. There are lots of free resources on the website as well, and there’s plenty of free videos to look at.

End: Health Hazard Analysis

So, that is the end of the session. Thank you very much for listening. And all that remains for me to say is thanks very much for supporting the work of the Safety Artisan and tuning into this video. And I wish you every success in your work now and in the future. Goodbye.


Operating & Support Hazard Analysis

In this full-length session, The Safety Artisan looks at Operating & Support Hazard Analysis, or O&SHA, which is Task 206 in Mil-Std-882E. We explore Task 206’s aim, description, scope, and contracting requirements. We also provide value-adding commentary, which explains O&SHA: how to use it with other tasks; how to apply it effectively on different products; and some of the pitfalls to avoid. We refer to other lessons for specific tools and techniques, such as Human Factors analysis methods.

This is the seven-minute-long demo. The full version is about 35 minutes long.

Operating & Support Hazard Analysis: Topics

  • Task 206 Purpose:
    • To identify and assess hazards introduced by O&S activities and procedures;
    • To evaluate the adequacy of O&S procedures, facilities, processes, and equipment used to mitigate risks associated with identified hazards.
  • Task Description (six slides);
  • Reporting (two slides);
  • Contracting (two slides); and
  • Commentary (four slides).

Operating & Support Hazard Analysis: Transcript


Introduction

Hello everyone and welcome to the Safety Artisan; home of safety engineering training. I’m Simon and today we’re going to be carrying on with our series on Mil. Standard 882E system safety engineering.

Operating & Support Hazard Analysis

Today, we're going to be moving on to the subject of Operating and Support Hazard Analysis. This is, as it says, Task 206 under the standard. Operating and Support Hazard Analysis is abbreviated O&SHA, but unfortunately, shortening that to OSHA would confuse people, so let's call it O&S.

Topics for this Session

The purpose of O&S hazard analysis is to identify and assess hazards introduced by those activities and procedures, and also to evaluate the adequacy of O&S procedures, processes, equipment, facilities, etc., to mitigate risks that have already been identified. A twofold task, but a very big one. And as we'll see, we've got lots of slides today on task description, reporting, contracting, and commentary. As always, I present the full text of the task as-is, which is copyright free, but I'm only going to talk about the things that are important. We're not going to go through every little clause of the standard – that would be pointless.

O&S Hazard Analysis (T206)

Let's get started with the purpose. As we've already said, it's to identify and assess those hazards which are introduced by operational and support activities and procedures, and to evaluate the adequacy of those procedures. So, we're looking at operating the system, whatever it may be. And of course, this is a military standard, so we assume a military system, but not all military systems are weapon systems by any means, and not all are physical systems. There may be inventory management systems, management information systems, all kinds of stuff. So, does operating those systems, and supporting them (maintaining them, resupplying them, disposing of them, etc.), create or introduce any hazards? And how do we mitigate them? That's the purpose of the task.

Task Description (T206) #1

Let's move on to the task description. Again, we're assuming a contractor is performing the analysis, but that's not necessarily the case. The task says this analysis typically begins during Engineering and Manufacturing Development, or EMD. So, we're assuming an American-style lifecycle for a big system, where EMD comes after concept and requirements development. We are beginning to move into the very expensive stage of development, where we begin to commit serious money. The standard is suggesting that O&SHA can wait until then, which is fine in general, unless you've identified any particularly novel hazards that will need to be dealt with earlier on. As it says, it should build on design hazard analyses, but we'll also talk later about the case where there are no design hazard analyses. And the O&SHA shall identify requirements, or alternatives, for eliminating hazards, mitigating risks, etc. This is one of those tasks where the human is very important – dominant, to be honest – both as a source of hazards and as the potential victim of the associated risks. There's a lot of human-centric stuff going on here.

Task Description (T206) #2

As always, we’re going to think about the system configurations. We’re going to think about what we’re going to do with the system and the environment that we’re going to do it in. So, a familiar triad and I know I keep banging on about this, but this really is fundamental to bounding and therefore evaluating safety. We’ve got to know what the system is, what we’re doing with it, and the environment in which we’re doing it. Let’s move on.

Task Description (T206) #3

Again, Human Factors, regulatory requirements, and particularly specified personnel requirements need to be thought of. For operating and support especially, we need to take into account the staffing and personnel concept that we have. It's frighteningly easy to produce a system that needs so much maintenance, or support activity, that it is unaffordable. Lots of military systems – and, it must be said, government and commercial systems too – have come in requiring enormous amounts of support, which soon proved unaffordable, or no one would sign up to the commitment required. So, lots of projects have simply died because the system was going to be too expensive to sustain. That's a key point of what we're doing with O&S here. It's not just about health and safety; it's about health and safety that is affordable.

We also need to look at unplanned events – not just designed-in things. The standard says "human errors"; again, I'm going to re-emphasize that it's erroneous human action, because "human error" makes it sound like the human is at fault, whereas very often it's the design, or the concept, or the requirements that are at fault and place unacceptable burdens on the human being. We've seen lots of messy systems in the past which didn't quite work, and we just expected the operator to cope. Most of the time they cope, and then every so often they have a bad day at the office, or a bunch of factors comes together, and lots of people die. And then we blame the human. Well, it's not the human's fault at all; we put them in that position. And as always, we need to look at past evaluations of related legacy systems and support operations. If you have good data about legacy systems, or about similar systems that your organization or another organization has operated, then that's gold dust. So, do make an effort to get hold of that information if you can. Maybe a trade association or some wider pan-organization body can help you there.

Task Description (T206) #4

At a minimum, we've got to identify activities involving known hazards. This assumes that we've done some hazard analysis in the past, which is very important – we always need to do that; I'll come back to it in the commentary. Secondly, changes needed in requirements, be they functional requirements (what we want the system to do) or design requirements (constraints on how the system may do it), for whatever it may be – hardware, software, support equipment – to make those hazards and risks more manageable. Then requirements for safety features: engineered features and devices, because in almost any jurisdiction we will have a hierarchy of controls that recognizes that designed- and engineered-in safety features are more effective than just relying on people to get it right. And then we've also got to communicate to people the hazards associated with the system: warnings, cautions, and whatever special emergency procedures might be required. Again, that's something we see reinforced in law and regulations in many parts of the world. This is all good stuff – accepted good practice all across the world.

Task Description (T206) #5

Moving on, we also need to think about how we are going to move the system around, along with the associated spares and supplies. How are we going to package them, handle them, store them, and transport them? Particularly if there are hazardous materials involved – that's the next part, G. Again, training requirements: we're taking a human-centric approach, so whatever we expect people to do, they've got to be trained in how to do it. Point I: we've got to include everything, whether it's developmental or non-developmental items. We can't just ignore stuff because it's GFE or it's off the shelf. That doesn't mean it can never go wrong – far from it. Particularly if we are putting stuff together that's never been put together before, in a novel combination or in a novel environment. Something that might be perfectly safe and stable in an air-conditioned office might start to do odd things in a much more corrosive and uncontrolled environment, let's say.

We need to think about the modes in which the system might be potentially hazardous when under operator control. Particularly, we might think about degraded modes of operation. So, for whatever reason, a part of the system has gone wrong, or the system has got into an operating environment in which it doesn't operate as well as it could – it's not in an optimal operating environment or state. The human being in control of it, we're assuming, has still got to be able to operate the system, even if only to shut it down or to get it back into a safer state or safer environment. We've got to think about all of those nuances.

Then, because we're talking about support as well, we need to think about related legacy systems, facilities, and processes, which may provide background information. Also, of course, the system will very often be operating alongside other systems, or it will be supported by other systems that may already exist or are being procured separately. So, we've got to think about all those interactions as well, and all those potential contributions. As you can see, this is quite a wide-ranging, broadly scoped task.

Task Description (T206) #6

Finally, in this section: the customer, the end-user, or whoever, may specify some specific analysis techniques. Very often they will not. So, whoever is doing the analysis, be they a contractor or a third-party outside agency, needs to make sure that whatever they propose to do is going to be acceptable to the program manager, in the sense that it is going to be compatible, relevant, and useful. And then finally, the contractor has got to do some O&SHA at the appropriate time, but maybe more detailed data will come along later, in which case that needs to be incorporated – and so do operational changes.

An absolute classic [situation] with military and non-military systems is: the system gets designed, it goes into test and evaluation, and we discover that assumptions that were made during development don't actually hold up. The real world isn't like that, or whatever it might be, and we find we're making changes – changes in assumptions. Those need to be factored in which, sadly, is often not done very well. So, that's an important point to think about. What's my change control mechanism, and how will the people doing the O&SHA find out about these changes? It's easy to assume that everybody knows about this stuff, but when you start making assumptions, the truth very often goes adrift.

Reporting (T206) #1

Let's talk about reporting – just a couple of slides here. There's some fairly standard stuff in here: the physical and functional characteristics of the system – that's important. Again, we might assume that everybody knows what they are, but it's important to put them in. It may be that the people doing the analysis were given a different system description to the people developing the system, or to the people doing the personnel planning, etc. All the different things that have to be brought together – we need to make sure that they join up again. It's too easy to get that wrong. And reinforcing the point I made on the previous slide, as more detailed descriptions and specifications become available, they need to be supplied.

Hazard analysis methods and techniques: what techniques are we using? Give a description. If you're doing it to a particular standard, so much the better – that saves a lot of paper. What assumptions have we made? What data, both qualitative and quantitative, have we used to support the analysis? That all needs to be declared. By the way, one of the reasons it needs to be declared is that when things change – not if – that's when these assumptions, the data, and the techniques get exposed. If we don't have this kind of information declared, we can't assess the impact of changes, and it gets even more difficult to keep up with what's going on.

Reporting (T206) #2

And then hazard analysis results. Again, the leading particulars of the results should be recorded in the hazard tracking system (the HTS), or hazard log, or risk register – whatever you want to call it. But there will be more detailed information that we wouldn't want to clutter up the risk register with, and we also need to provide warnings, cautions, and procedures to be included in maintenance manuals, training courses, operator manuals, etc. So, we're going to generate, or probably generate, an awful lot of data out of this task, and that needs to be provided in a suitable format. Again, the program manager on the client side, or whoever represents the end-user, needs to think about this stuff quite early on.
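To illustrate the split between the "leading particulars" in the risk register and the detail held elsewhere, here is a sketch of a minimal hazard tracking record. The field names are my own illustration, not anything mandated by the standard (Task 106 of Mil-Std-882E describes what an HTS must actually capture); the point is that the record carries pointers out to the detailed analyses and manual entries rather than the full content.

```python
from dataclasses import dataclass, field

@dataclass
class HazardRecord:
    """Illustrative minimal hazard tracking system (HTS) record."""
    hazard_id: str
    description: str
    causes: list = field(default_factory=list)
    initial_risk: str = ""          # e.g. a matrix cell such as "2C - Serious"
    target_risk: str = ""
    mitigations: list = field(default_factory=list)
    status: str = "Open"
    # Pointers to the detail we don't want cluttering the risk register:
    analysis_refs: list = field(default_factory=list)   # e.g. O&SHA report sections
    manual_refs: list = field(default_factory=list)     # warnings/cautions in manuals

rec = HazardRecord(
    hazard_id="HAZ-0042",
    description="Maintainer exposure to hydraulic fluid under pressure",
    initial_risk="2C - Serious",
    mitigations=["Depressurise system before maintenance (procedure M-12)"],
    manual_refs=["Maintenance manual ch. 7, warning W-3"],
)
print(rec.hazard_id, rec.status)
```

The identifiers and references above (HAZ-0042, M-12, W-3) are hypothetical; the value of the structure is that when an assumption or a design changes, you can trace from the register entry to every analysis and manual entry that needs revisiting.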

Contracting #1

That leads us neatly on to contracting. Now, this task, in theory, can be specified a little bit down the track, after the program has started. In practice, what you find is that program managers try to specify everything up front in a single contract, for various reasons.

There are sometimes good reasons for doing that; there are also bad reasons, but I'm not going to talk about them in this session. We'll talk about planning your system safety program in another session – there are a lot of nuances in there to be considered.

Just sticking to this task: identification of functional disciplines – who do we need to get involved in order to do this work properly? It's likely that the safety team, if you have one, may not have relevant operating experience or relevant sustainment experience for this kind of system. If they do, that's fantastic, but it doesn't negate the requirement to get the end-user represented and involved. In fact, that's a near-legal requirement in Australia, for example, and in some other jurisdictions. We need to get the end-users involved, and we need the discipline specialists involved: typically, your integrated logistic support team, your reliability people, your maintainability and testability people, if you have those disciplines. Maybe you call them something else; it doesn't really matter.

We need to know what the reporting requirements are. What, if any, analysis methods and techniques do we want used? Maybe the client or end-user has got to jump through some regulatory hoops and therefore needs specific analysis work and safety results to be done and produced. If that's the case, then it needs to be specified in the contract. What data is to be generated, in what format, and how and when is it to be reported, considering the hazard tracking system, etc.? And then the client may also select or specify known hazards, known hazardous areas, or other specific items to be examined or excluded – maybe because they're being covered elsewhere, or we don't expect the contractor to be able to do this work. Maybe we need to use a specialist organization, or maybe a regulator has directed us to do so. All of these things need to be thought about when we're putting together the contract requirements for Task 206.

Contracting #2

Again, I say this every time, we need to include all items within the scope of the system and the environment, not just developmental stuff. In fact, these days, maybe the majority of programs that I am seeing are mostly non-developmental. So, we’re taking lots of COTS stuff, GFE components, and putting it all together. That’s all going to be included, particularly integration.

We need to think about legacy and related processes and the hazard analysis associated with them if we can get them. They should be supplied to whoever is doing the work and an analyst should be directed to review them and include lessons learned.

Then, reinforcing the previous point about the hazard tracking system – how will information reported in this task be correlated with tasks and analyses that are being done maybe elsewhere or by different teams? And the example here is Task 207, Health Hazard Analysis. I’ll talk a little bit about the linkages between the two later. But it’s quite likely in this sort of area there will be large groups of people thinking about operations and maintenance and support. Very often those groups are very different. Sometimes they don’t even talk to each other; that’s the culture in different organizations. You don’t see airline pilots hanging around down the pub with baggage handlers very much, do you, for whatever reason? Different sets of people – they don’t always mix very much. And again, you may also have different specialist disciplines, especially the Human Factors people. You’ve got to tie everything in there. So, there are going to be lots of interfaces in this kind of task that have got to be managed.

Point (i) – concept of operations. Yes, that’s in every task. You’ve got to understand what we intend to do with this system, or what the end-user intends to do with the system, in order to have some context for the analysis.

And then finally, what risk definitions and what risk matrix are we using? If we’re not using the standard 882 matrix, then what are we doing?

Commentary #1

I’ve got four slides of commentary now – a number of things to say about Task 206.

Now, I’ve picked an Australian example. Task 206 ties in very neatly with Australian WHS requirements. I suspect Australian WHS requirements have been strongly influenced by American OSHA and system safety practices; in Australia, we are heavily influenced by the US approach. This standard and legal requirements in Australia – and in many other states and territories, let’s be honest – do tie in nicely, although not always perfectly; you’ve got to remember that. So, we do need to focus on operations and support activities. That’s a big part of WHS: thinking about all relevant activities, cradle to grave – the whole life of the system. We need to think about the working environment, the workplace. We need to think about humans as an integral part of the system, be they operators, maintainers, suppliers, or other kinds of sustainers. And we need to be providing relevant information on hazards, risks, warnings, training, procedures, requirements for PPE, and so on to workers.

So, Task 206 is going to be absolutely vital to achieving WHS compliance in Australia, and compliance with health and safety legislation and regulations in many parts of the world – in the US and UK, and I would say in virtually all developed nations. So, this is a very important task for achieving compliance with the law and regulations. It needs to get the requisite amount of attention – it doesn’t always. So often on a program, during procurement, acquisition, and development, the technical system is the sexy thing. That’s the thing that gets all the attention, especially early on. The operating and particularly the support side tends to get neglected because it’s not so sexy. We don’t buy a system to support it, after all, do we? We buy a system to do a job. So, we get the operators in and get their input on how to optimize the system to do the job most cost-effectively and with the most mission effectiveness we can get out of it. We don’t often think about support effectiveness. But to achieve WHS compliance, or the equivalent, this is a very important task, so we will almost always need to do it.

Commentary #2

The second item to think about – what is going to be key for the maintenance and support side is a technique called Job Safety Analysis or Job Hazard Analysis. I’ve highlighted a couple of sources of information there; in particular, I would recommend going to the American www.OSHA.gov site and the guidance they provide on how to do a job hazard analysis. So, use that – or if something different is specified in the jurisdiction you’re working in, then go ahead and use that instead. But if you don’t have any [guidance] on what to do, this will help you.

This is all about: I’ve got a task to do, whatever it might be – how do I do it? Let’s analyse it step by step, or at least in reasonable-sized chunks, thinking about how we do the tasks that need to be done. Now, there’s the operator side, and of course, we’re always dealing with human beings working on the system or working with the system. So, we’re potentially going to see a lot of Human Factors-type techniques being relevant. And there are lots of tasks that we can think about; Hierarchical Task Analysis and that kind of approach is going to fit in with the Job Hazard Analysis as well. Those will link together quite well. There will also be things like workload analysis. Particularly for the operators: if we’re asking the operator to do a lot, and to maintain a particular level of concentration or respond rapidly, we need to think about workload – both too much workload and too little can make things worse.
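The step-by-step decomposition described above can be captured in a very simple data structure: each job step, the hazards identified for it, and the controls recorded against them. This is only a minimal sketch of the idea – the job, steps, hazards and controls below are hypothetical examples, not taken from OSHA guidance:

```python
from dataclasses import dataclass, field

@dataclass
class JobStep:
    """One step of a job, per Job Hazard Analysis practice."""
    description: str
    hazards: list = field(default_factory=list)   # what could go wrong at this step
    controls: list = field(default_factory=list)  # how we prevent or mitigate it

@dataclass
class JobHazardAnalysis:
    job: str
    steps: list = field(default_factory=list)

    def uncontrolled(self):
        """Steps that still have hazards but no recorded controls."""
        return [s for s in self.steps if s.hazards and not s.controls]

# Hypothetical maintenance task, broken into reasonable-sized chunks
jha = JobHazardAnalysis(job="Replace hydraulic accumulator")
jha.steps.append(JobStep(
    description="Depressurise the hydraulic system",
    hazards=["Stored pressure released suddenly"],
    controls=["Follow isolation procedure", "Verify zero pressure on gauge"]))
jha.steps.append(JobStep(
    description="Drain residual fluid",
    hazards=["Skin contact with hydraulic fluid"]))  # no control recorded yet

for step in jha.uncontrolled():
    print("Needs controls:", step.description)
```

The point of the `uncontrolled` check is simply that a JHA is only finished when every hazardous step has controls against it.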

There are lots of techniques out there, I’m not going to talk about Human Factors here. I’m going to be putting on a series on Human Factors techniques in cooperation with a specialist in that area. So, I’m not going to say more here.

For certain kinds of operators – let’s say pilots, people navigating a ship, drivers, and so on – there will be well-established ways those operators are trained and have to operate. There will often be a legal and regulatory framework that says how they have to operate, and that may direct a particular kind of analysis to be done, or a particular approach to be taken, for how operators do their jobs. But equally, there is a vast range of operator roles in industry – in chemical plants, say, and various specialist operating roles – where there’s an industry-specific approach to doing things. Or indeed, the general approach may be left up to whoever is developing the system. So, there’s a huge range of approaches here, largely dictated by the concept of operations and by an awareness of the relevant law, regulation, and good practice in a particular industry, in a particular situation. That’s where doing your Task 203, your safety requirements analysis, really kicks in. It’s a very broad subject we’re covering here; you’ve got to get the specialists in to do it well.

Contracting #3

Now, I mentioned that these days we’re seeing more and more legacy and COTS systems being used and repurposed, partly to save time and money. We’re not developing mega-systems as often as we used to, particularly in defence, but also in many other walks of life. So, we may find ourselves evaluating a system where very little technical hazard analysis has been done, because there are no developmental items, and it’s even difficult to do analysis on a legacy or COTS system because we cannot get the data to do so. Perhaps we can’t get the data for commercial or contractual reasons.

Or maybe we’ve got a legacy system that was developed in a different jurisdiction and whatever information is available with it just doesn’t fit the jurisdictional regulatory system that we’ve got to work in where we want to operate the system. This is very common. Australia, for example, [acquires] a lot of systems from abroad, which have not been developed in line with how we normally do things.

We could, in theory, just do Task 206 if there were no developmental hazard analysis to do – but that’s not quite true. At a minimum, we will always need to do some Preliminary Hazard Listing and hazard analysis – Tasks 201 and 202 respectively. And we will very definitely need to do some System Requirements Hazard Analysis, Task 203, to understand what we need to do for a particular system in a particular application, operating environment, and regulatory jurisdiction. So, we’re always going to have to do those, and we may well have to look at the integration of COTS items and do some system-level analysis – that’s Task 204. We’re definitely going to need to do the early analyses. In fact, the client and the end-user representatives should be doing 201, 202 and 203, and then we may be in a position to finish things off with 206 for certain systems.

Contracting #4

Now, having said that, I’ve mentioned already that Task 206 can be very broad in scope and very wide-ranging. There’s a danger that we turn Task 206 into a bottomless pit into which we pour money and effort and time without end. So, for most systems, we cannot afford to just do O&SHA across the board without any discernment or prioritization.

So, we need to look at those other hazard analyses and prioritize those areas where people could get hurt. In particular, we should be using legacy and historical data here to ask, “What, in reality, does hurt people when looking after or operating these systems?” Again, as I’ve said before, in many industries there is a standard industry approach, or good practice, for how certain systems are operated, maintained, and supported. So, if there is a standard industry approach available – particularly if we can justify it with available historical data – and it [is as good] as doing analysis, then why not just use the standard approach? It’s going to be easier to make an SFARP or ALARP argument that way anyway. And why spend the money on analysis when we don’t have to? We could just spend the money on actually making the system safer. So, let’s not do analysis for the sake of doing analysis.

Also, there’s a strong synergy between the later tasks in the 200 series. There’s a strong linkage between this Task 206 and 207, which is Health Hazard Analysis. Also, there can be a strong linkage between Task 210, which is the Environmental Hazard Analysis. So, this trio of tasks focuses on the impact on living things, whether they be human beings or animals and plants and ecosystems and very often there’s a lot of overlap between them. For example, hazardous chemicals that are dangerous for humans are often dangerous for animals and plants and watercourses and so on and so forth. I’ll be talking about that more in the next session on Task 207.

One word of warning, however. Certainly in Australia, we have got fixated on hazardous chemicals, because we’ve had some very high-profile scandals involving HAZCHEM in the past. Now, there’s nothing wrong, of course, with learning from experience and applying rigorous standards when we know things have gone wrong in the past. But sometimes we get into a mindset of analysis for the sake of analysis. Dare I say, to cover people’s backsides rather than to do something useful. So, we need to focus on whether the presence of a HAZCHEM could actually be a problem – whether people get exposed to it, not just that it’s there.

Certain chemicals may be quite benign in certain circumstances, and only become dangerous after an emergency, for example. There are lots of things in a system that are perfectly safe until the system catches fire; then, when you’re trying to dispose of or repair a fire-damaged system, they can be very dangerous. So, we need to be sensible about how we go about these things. Anyway, more on that in the next session.

Copyright Statement

That’s the commentary that I have on Task 206. As we said, it links very tightly with other things, and we will talk about those in later sessions. I’d just like to point out that the “italic text in quotations” is from the Mil. standard. That is copyright-free, as most American government standards are. However, this presentation and my commentary, etc., are copyright of the Safety Artisan 2020.

For More …

Now, for all lessons and resources, please do visit www.safetyartisan.com. As you’ll notice, it’s an https site – a secure website.

End: Operating & Support Hazard Analysis

So, that is the end of the lesson and it just remains for me to say thank you very much for your time and for listening. And I look forward to seeing you again soon. Cheers.

Categories
Mil-Std-882E Safety Analysis

Functional Hazard Analysis

In this full-length (40-minute) session, The Safety Artisan looks at Functional Hazard Analysis, or FHA, which is Task 208 in Mil-Std-882E. FHA analyses software, complex electronic hardware, and human interactions. We explore the aim, description, and contracting requirements of this Task, and provide extensive commentary on it. (We refer to other lessons for special techniques for software safety and Human Factors.)

This is the seven-minute demo; the full version is 40 minutes long.

Topics: Functional Hazard Analysis

  • Task 208 Purpose;
  • Task Description;
  • Update & Reporting;
  • Contracting; and
  • Commentary.

Transcript: Functional Hazard Analysis

Click here for the Transcript

Introduction

Hello, everyone, and welcome to the Safety Artisan; Home of Safety Engineering Training. I’m Simon and today we’re going to be looking at how you analyse the safety of functions of complex hardware and software. We’ll see what that’s all about in just a second.

Functional Hazard Analysis

I’m just going to get to the right page. As you can see, Functional Hazard Analysis is Task 208 in Mil. Standard 882E.

Topics for this Session

What we’ve got for today: we have three slides on the purpose of functional hazard analysis, and these are all taken from the standard. We’ve got six slides of task description – that’s the text from the standard, plus two tables that show you how it’s done, taken from another part of the standard, not from Task 208. Then we’ve got update and recording, another two slides; contracting, two slides; and five slides of commentary, which again include a couple of tables to illustrate what we’re talking about.

Functional HA Purpose #1

What we’re going to talk about is, as I say, functional hazard analysis. So, first of all, what’s the purpose of it? And in classic 882 style, Task 208 is to perform this functional hazard analysis on a system or subsystem or more than one. Again, as with all the other tasks, it’s used to identify and classify system functions and the safety consequences of functional failure or malfunction. In other words, hazards.

Now, I should point out at this stage that the standard is focused on malfunctions of the system. The truth is that, in the real world, lots of software-intensive systems have been involved in accidents that have killed lots of people even when they were functioning as intended. That’s one of the short-sighted aspects of this Mil. Standard: it focuses on failure. The idea that when something is performing as specified, the specification might be wrong, or there might be some disconnect between what the system is doing and what the human expects – the way the standard is written just doesn’t recognize that. So, it’s not very good in that respect. However, bearing that in mind, let’s carry on with looking at the task.

Functional HA Purpose #2

We’re going to look at these consequences in terms of severity – severity only, we’ll come back to that – for the purpose of identifying what they call safety-critical functions, safety-critical items, safety-related functions, and safety-related items. And a quick word on that: I hate the term ‘safety-critical’ because it suggests a binary choice – either it’s safety-critical, yes, or it’s not safety-critical, no. And lots of people take that to mean that if it’s “safety-critical: no”, then it’s got nothing to do with safety. They don’t recognize that there’s a sliding scale between maximum safety criticality and none whatsoever. And that’s led to a lot of bad thinking and bad behaviour over the years, where people do everything they can to pretend that something isn’t safety-related by saying, “Oh, it’s not safety-critical, therefore we don’t have to do anything.” And that kind of laziness kills people, is the short answer.

Anyway, moving on. So, we’ve got these SCFs, SCIs, SRFs and SRIs, and they’re supposed to be allocated, or mapped, to a system design architecture. The assumption in this task is that we’re doing it early – we’ll see that later – and that the system design, the system architecture, is still up for grabs; we can still influence it. Often that is not the case these days. This standard was written many years ago, when the military used to buy loads of bespoke equipment and have it all developed from new. That doesn’t happen so much anymore in the military, and it certainly doesn’t happen in many other walks of life – but we’ll talk about how you deal with the realities later. And we’re allocating these functions and these items of interest to hardware, software and human interfaces. And I should point out, all of these things are complex: software is complex, the human is complex, and we’re talking about complex hardware. We’re talking about components where you can’t just say, “Oh, it’s got a reliability of X, and that’s how often it goes wrong” – those simple components that are only really subject to random failure are not what we’re talking about here. We’re talking about complex stuff, where systematic failure dominates over random, simple hardware failure. So, that’s the focus of this task. That’s not explained in the standard, but that’s what’s going on.

Functional HA Purpose #3

Now, our third slide on purpose: we use the FHA to identify the consequences of malfunction, functional failure, or lack of function. As I said just now, we need to do this as early as possible in the systems engineering process to enable us to influence the design. Of course, this assumes that there is a systems engineering process – that’s not always the case. We’ll talk about that at the end as well. And we’re going to identify and document these functions and items, allocate them and, as it says, partition them in the software design architecture. When we say partition, that’s jargon for separating them into independent functions; we’ll see the value of that later on. Then we’re going to identify requirements and constraints to put on the design team, to say, “To achieve this allocation and this partitioning, this is what you must do and this is what you must not do”. So again, the assumption is that we’re doing this early and there’s a significant amount of bespoke design yet to be done.

Task Description (T208) #1

Moving on to task description. It says ‘the contractor’, but whoever is doing the analysis has to perform and document the FHA, to analyse those functions, as it says, with the proposed design. I talked about that already, so we’ll move on.

It’s got to be based on the best available data, including mishap data. So, accident/incident data, if you can get it from similar systems and lessons learned. As I always say in these sessions, this is hard to do, but it’s really, really valuable so do put some effort into trying to get hold of some data or look at previous systems or similar systems. We’re looking at inputs, outputs, interfaces and the consequences of failure. So, if you can get historical data or you can analyse a previous system or a similar system, then do so. It will ultimately save you an awful lot of money and heartache if you can do that early on. It really is worth the effort.

Task Description (T208) #2

At a minimum, we’ve got to identify and evaluate functions, and to do that, we need to decompose the system. So, imagine we’ve got this great big system: we’ve got to break it down into subsystems and major components. We’ve got to describe what each subsystem and major component does – its function, or its intended function. Then we need a functional description of the interfaces, thinking about what connects to what and the functional ins and outs. Pretty obvious stuff, I guess, but it needs to be done.

Task Description (T208) #3

And then we also need to think about the hazards associated with, first of all, loss of function – so, no function when we need it. Then we have degraded function, malfunction, and functioning out of time or out of sequence. So, we’ve got different kinds of malfunction. What we don’t have here is function when not required – the system goes active for some reason and does something when it’s not meant to. If we add that third one, we’ve got a functional failure analysis.

Essentially, we’re talking about a functional failure analysis here, or maybe something a bit more sophisticated, like a HAZOP. And the HAZOP is more sophisticated because, instead of just those three things that can go wrong, we’ve got lots of guide words to help us think about ‘out of time, out of sequence’ – too early, too late, before intended, after intended, whatever it might be. And there are variations on HAZOP, called computer HAZOP or CHAZOP, where people have come up with different guide words, different prompt words, to help you think about software in data-intensive systems. So, that’s a possible technique to use here.
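The functional failure analysis described above is mechanical enough to sketch in code: cross every system function with each failure type (plus some HAZOP-style timing guide words) to generate the prompts an analyst should consider. The function names below are hypothetical, and the exact guide-word list is a matter of tailoring, not something fixed by the standard:

```python
from itertools import product

# Failure types from the task description, plus 'function when not required',
# with HAZOP-style guide words expanding 'out of time / out of sequence'.
FAILURE_MODES = [
    "loss of function",
    "degraded function",
    "malfunction",
    "function when not required",
    "function too early",
    "function too late",
    "function out of sequence",
]

def ffa_prompts(functions):
    """Generate one analysis prompt per (function, failure mode) pair."""
    return [f"{func}: {mode}" for func, mode in product(functions, FAILURE_MODES)]

# Hypothetical system functions
prompts = ffa_prompts(["Provide braking", "Display airspeed"])
print(len(prompts))  # 2 functions x 7 modes = 14 prompts to work through
```

Each prompt is then assessed for whether it leads to a hazard, and on to a mishap sequence, as the task description requires.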

And then, when we’re thinking about these hazards that might be generated by malfunction or functional failure in its various forms, we need to think about, “What’s the next step in the mishap sequence, the accident sequence? And what’s the final outcome of the accident sequence?” That’s very important for software, because software is intangible – it has no physical form. On its own, in isolation, software cannot possibly hurt anyone. So, you’ve got to look at how the software failure propagates through the system into the real world and how it could harm people. That’s a very important prompt, that last sentence in yellow there.

Task Description (T208) #4

And we carry on. We need to assess the risk associated with failure of a function, subsystem or component. We’re going to do so using the standard 882 tables – tables one and two, and the risk assessment codes in table three – unless we come up with our own tailored versions of those tables and that matrix, and that’s all been approved. In reality, most people don’t tailor this stuff. They should make it appropriate for the system, but they rarely do.

Table I and II

So, just to remind us what we’re talking about, here are tables one and two. Table one is severity categories, ranging from catastrophic – an outcome that could kill somebody – down to negligible, where we’re talking cuts and bruises; very minor injuries.

And then table two, probability levels. We’ve got everything from frequent down to eliminated – there’s no hazard at all because we’ve eliminated it; it will never happen in the lifetime of the universe. So, it really is a zero probability. We’ve got frequent down to improbable, and in the standard we’ve got a definition for each of these in words, for a single item and also for a fleet or inventory of those items, assuming there’s a large number of them. And that’s very useful. It helps us to think about how often something might go wrong per item and per fleet.

Table III

So, that’s tables one and two; we put them together, the severity and the probability, to give us table three. As you can see, we’ve got probability down the left-hand side, and at the bottom, if we’ve eliminated the hazard, then there is no severity – the hazard is completely eliminated, so forget about that row. Then, for everything else, we’ve got frequent down to improbable for probability, and catastrophic down to negligible for severity. Together those generate the risk assessment code, which is either high, serious, medium or low. That’s the way this standard defines things. Nothing is off-limits; nothing is perfect except elimination. We’ve just defined a level of risk, and then you have to make up rules about how you will treat these levels of risk. The standard does some of that for you, but usually you’ve got to work out, depending on which legal jurisdiction you’re in, what you’re required to do about the different levels of risk.

Now this table on its own, I’ll just mention, is not helpful in a British or Australian jurisdiction, where we have to reduce or eliminate risks SFARP. The table on its own won’t help you do that, because this is just an absolute level of risk. It’s not considering what you could have done to make it better. It’s just saying where we are; it’s a status report.
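Putting tables one, two and three together, the risk assessment code is just a lookup over severity and probability. The sketch below shows the mechanics; the cell values are my transcription of the standard’s Table III, so verify them against your (possibly tailored) copy of the standard before relying on them:

```python
# Mil-Std-882E severity categories (Table I), worst to least
SEVERITIES = ["Catastrophic", "Critical", "Marginal", "Negligible"]

# Risk assessment codes (Table III): rows = probability, columns = severity.
# Transcribed values -- check against your own (possibly tailored) matrix.
RISK_MATRIX = {
    "Frequent":   ["High",    "High",    "Serious", "Medium"],
    "Probable":   ["High",    "High",    "Serious", "Medium"],
    "Occasional": ["High",    "Serious", "Medium",  "Low"],
    "Remote":     ["Serious", "Medium",  "Medium",  "Low"],
    "Improbable": ["Medium",  "Medium",  "Medium",  "Low"],
}

def risk_code(severity: str, probability: str) -> str:
    """Look up the risk assessment code; 'Eliminated' is the special row."""
    if probability == "Eliminated":
        return "Eliminated"  # no severity applies -- the hazard is gone
    return RISK_MATRIX[probability][SEVERITIES.index(severity)]

print(risk_code("Catastrophic", "Remote"))  # Serious
```

Note that this lookup gives you the absolute risk status only; as said above, it does not of itself make an SFARP or ALARP argument.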

So, those are your tables one, two and three, as the standard describes them. That’s the overall method and we’re going to do what it says in Section four of the standard. In the main body of the standard, Section four talks about software and complex hardware and how we allocate these things.

Task Description (T208) #5

And then finally, I think, on task description: an assessment of whether the identified functions are to be implemented in the design, and a mapping of those functions onto the components. And then it says functions allocated to software should be matched to the lowest level of technical design or configuration item. So, if you’ve got a software or hardware configuration item that is further subdivided into sub-items, then you need to go all the way down and see which items can contribute to that function and which can’t.

That’s an important labour-saving device, because you could have quite a large configuration item where, actually, only a tiny bit contributes to the hazard. So, in theory, that’s the only thing you need to worry about. In reality, partitioning software is not as easy as the standard might suggest. However, if we can do a meaningful partition, then we could and should aim to have as little safety-related software as we possibly can – if nothing else, for cost reasons, and to get the project in on time. So, the less criticality we have in our system, the better.

Task Description (T208) #6

So, we need to assess the software control category (SCC) for each configuration item that’s been allocated a safety-significant software function (SSSF). Having assigned the SCC, we then have to work out the software criticality index for each of those functions, and we’ll talk about how to do that at the end. Then, from all of this work, we need to generate a list of requirements and constraints to include in the spec which, if they work, will eliminate the hazard or reduce the risk.
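Mechanically, the SCC-to-SwCI assignment is a second lookup, this time over worst credible severity and software control category. The values below are my transcription of the standard’s software criticality matrix and should be treated as illustrative – check them against the standard (and any tailoring) before use:

```python
# Software Control Categories (SCC) 1-5: 1 = Autonomous ... 5 = No Safety Impact.
# Software Criticality Index (SwCI) by worst credible severity -- my transcription
# of the standard's matrix, illustrative only; verify against Mil-Std-882E.
SWCI_TABLE = {
    1: {"Catastrophic": 1, "Critical": 1, "Marginal": 3, "Negligible": 4},
    2: {"Catastrophic": 1, "Critical": 2, "Marginal": 3, "Negligible": 4},
    3: {"Catastrophic": 2, "Critical": 3, "Marginal": 4, "Negligible": 4},
    4: {"Catastrophic": 3, "Critical": 4, "Marginal": 4, "Negligible": 4},
    5: {"Catastrophic": 5, "Critical": 5, "Marginal": 5, "Negligible": 5},
}

def software_criticality(scc: int, worst_severity: str) -> int:
    """SwCI for a configuration item: a lower index means more rigour required."""
    return SWCI_TABLE[scc][worst_severity]

# e.g. a semi-autonomous (SCC 2) function whose worst credible outcome is Critical
print(software_criticality(2, "Critical"))  # 2
```

The design incentive mentioned earlier falls out of this table: partitioning a function into a less authoritative control category, or reducing the worst credible severity, raises the SwCI and lowers the rigour (and cost) required.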

And the standard says these could be in the form of fault tolerance, fault detection, fault isolation, fault annunciation or warning, or fault recovery. Now, this breakdown reveals that this is basically a reliability breakdown. In the world of reliability, we typically talk about fault tolerance, fault detection, warning, and recovery – four things; they’ve split them into five here. Now, software reliability is highly controversial, so really this is a bit of a mismatch. These reliability-based suggestions are not necessarily much use for software, or indeed for people sometimes. You may have to use other, more typical software techniques to do this and, in fact, the standard does point you to do that. But that’s for another session.

FHA Update & Records #1

So, we’ve done the FHA, or we’re doing the FHA. We’ve got to record it, and we’ve got to update it when new information comes through. So, we’ve got to update the FHA as the design progresses or operational changes come in. We’ve got to have a system description of the physical and functional characteristics of the system and subsystems. And of course, for complex design items like software, context is everything, so this is very important. Again, software in isolation cannot hurt anyone; you’ve got to have the context to understand what the implications might be. If we don’t have that, we’re stuffed, pretty much. Then it goes on to say that when further documentation becomes available, more detail needs to be supplied. So, don’t forget to ask for that in your contract – and expect it as well, and be ready to deal with it.

FHA Update & Records #2

Moving on. When it comes to hazard analysis methods and techniques, we need to describe the method and the technique used for the analysis, and what assumptions and what data were used in support of the analysis – this statement is in pretty much every single task, so I’ll say no more; you’ve heard this before. Then again, analysis results need to be captured in the hazard tracking system and, as I’ve always said, usually the leading details, the top-level details, go into the hazard tracking system. The rest goes into the hazard analysis report; otherwise, you end up with a vast amount of data in your HTS and it becomes unwieldy and potentially useless.
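That split between the hazard tracking system and the hazard analysis report can be sketched as a record type: the HTS entry carries only the leading details plus a pointer to the full analysis. The field names and the example entry here are hypothetical, not prescribed by the standard:

```python
from dataclasses import dataclass

@dataclass
class HazardRecord:
    """Top-level entry in the hazard tracking system (HTS).

    Only the leading details live here; the full analysis stays in the
    hazard analysis report, referenced by `analysis_ref`.
    """
    hazard_id: str
    description: str
    severity: str     # Table I category
    probability: str  # Table II level
    risk_code: str    # Table III risk assessment code
    analysis_ref: str # pointer to the hazard analysis report section

# Hypothetical example entry
rec = HazardRecord(
    hazard_id="HAZ-042",
    description="Loss of braking function on approach",
    severity="Catastrophic",
    probability="Remote",
    risk_code="Serious",
    analysis_ref="FHA Report v1.2, section 4.3")
print(rec.hazard_id, rec.risk_code)
```

Keeping the HTS record this lean is exactly the point made above: the tracking system stays queryable, and the bulk of the evidence lives in the report it references.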

Contracting #1

Contracting – Again, this is a pretty standard clause, or set of clauses, in a Mil. Standard 882 task. So, in our request for proposal and statement of work, we’ve got to ask for Task 208. We’ve got to point the analyst, the contractor, at what we want them to analyse particularly or maybe as a minimum. And what we don’t want to analyse, maybe because it’s been done elsewhere or it’s out of scope for this system.

We need to say what our data reporting requirements are, considering Task 106, which is all about the hazard tracking system – the hazard log, or the risk register, whatever you want to call it. So, what data do we want? In what format? What are the definitions, etc.? Because if you’re dealing with multiple contractors, or you want data that is compatible with the rest of your inventory, then you’ve got to specify what you want. Otherwise, you’re going to get variability in your data, and that’s going to make your life a whole lot harder downstream. Again, this is standard stuff.

And what are the applicable requirements, specifications and standards? Of course, this is an American standard, so compliance with specifications, requirements and standards is everything – that’s the American system.

Contracting #2

We need to supply the concept of operations, as I’ve said before. With a complex design, especially software, context is everything. So, we need to know what we’re going to do with the system that the software sits within. This system has got some functions – that’s what we’re looking at in Task 208: What are those functions for? How do they relate to the real world? How could we hurt people? And then, have we got any other specific hazard management requirements? Maybe we’re using a special matrix because we’ve decided the standard matrix isn’t quite right for our system. Whatever we’re doing, if we’ve got special requirements that are not the norm for the vanilla standard, then we need to say what they are. Pretty straightforward stuff.

Commentary #1

We’re onto commentary, and I think we’ve got five slides of commentary today.

As it says, functional hazard analysis depends on systems engineering. So, if we don’t have good systems engineering, we’re unlikely to have good functional analysis. What do I mean by good systems engineering? I mean that, for the complete system – apart from things we’ve deliberately excluded for a good reason – we need all functions to be identified, and we need those functions to be analysed and allocated correctly, rigorously, and consistently. We need interface analysis and control, and we need the architecture of the design to be determined based on the higher-level requirements – all that work that we’ve done.

Now, if those things are not done, or they’re incomplete, or they were done too late to influence the design architecture, then you’re going to have some compromised systems engineering. And these days, because we’re using lots of commercial off-the-shelf stuff, what you find is that your top-level design architecture is very often determined before you even start. You’ve decided you’re going to have an off-the-shelf this and a modified off-the-shelf that, and you’re going to put them together in a particular way with a set of business rules – a concept of operations – that says this is how we’re going to use this stuff. And our new system interfaces with some existing stuff, and we can’t modify the existing stuff.

So, that really limits what we can do with the design architecture. A lot of the big design decisions have already been taken before we even got started. Now, if that’s the case, then that needs to be recognized and dealt with. I’ve seen those things dealt with well – in other words, the systems engineering has been done recognizing those constraints, those things that can’t be changed. And I’ve seen it done badly, where, figuratively speaking, the systems engineering team or the program manager has just given us a Gallic shrug and gone, “Yeah, what the heck, who cares?” So, those are the two extremes that you can see.

Now, if the systems engineering is weak or incomplete, then you’re going to get a limited return on doing Task 208. Maybe there are some new areas where you can do it, or maybe you’ve got a new interface that’s got to be worked up and created in order to get these things to talk to each other. Clearly, there is some mileage in doing that; you’re going to get some benefits in that area. But for the stuff that’s already been done – well, what’s the point of doing systems engineering there? What does it achieve? So, in those circumstances, it’s better – in fact, I would say it’s essential – to understand where systems engineering is still valid, where you are still going to get some results, and where it isn’t. And maybe you just declare that scope: what’s in and what’s out.

Or maybe you take a different approach. Maybe you go, “OK, we’re dealing with a predominantly COTS system. We need a different way of dealing with this than the way Mil-Std-882 assumes.” So, you’re going to have to do some heavy tailoring of the standard, because 882 assumes that you’re determining all these requirements pre-design. If that’s not the case, then maybe 882 isn’t for you. Or maybe you just need to recognize that you’re going to have to hack it about severely – which in turn means you’ve got to know, fundamentally, what you’re doing. In which case the standard really is no longer fulfilling its role of guiding people.

Commentary #2

Moving on. Let’s assume that we are still going to do some Task 208. We’re going to determine some software criticality, and we’re also going to determine some criticality for complex hardware – whether that be software in complex, pre-programmed electronics, or whatever it might be. First of all, as we said before, we’re going to determine the software control category, and what that’s really saying is: how much authority does the software have? And then secondly, we’re going to be looking at severity, which was Table One: how severe is the worst hazard or risk that the software could contribute to? These are illustrated in the next two slides. A session – or several sessions – on software safety is coming soon; that will be elsewhere. I’m not going to go into massive detail here. I’m just giving you an overview of what the task requires.

Commentary #3: Software Control Categories 1-5

First of all, how do we determine software control category? So, there’s the table from the standard. We’ve got five levels of SCC.

At the top, we’ve got autonomous. Basically, the software does whatever it wants to, and there are no checks and balances.

Secondly, there’s semi-autonomous. There is one software system performing a function, but there are hardware interlocks and checks. And those hardware interlocks and checks – whatever else there is that is not software – can work fast enough to prevent the accident happening. So, they can prevent harm. That’s semi-autonomous.

Then we’ve got redundant fault-tolerant, where you’ve got an architecture typically with more than one channel, and maybe all channels are software controlled. Maybe there’s diversity in the software, and there is some fault-tolerant architecture – maybe a voting system or some monitoring system saying, “Well, Channel Three’s output is looking a bit dodgy” or “Something’s gone wrong with Channel Two. I’ll ignore the channel at fault, take the good output from the channels that are still working, and use that.” So that’s that option. Very common.
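To make the voting idea concrete, here is a minimal sketch of a two-out-of-three voter in Python. It is illustrative only – the function name, the tolerance, and the error handling are my assumptions, and real voters also latch faults, report the failed channel, manage timing, and so on.

```python
def vote_2oo3(a, b, c, tolerance=0.01):
    """Two-out-of-three voter for a triplex redundant channel set.

    Returns an agreed output if at least two channels agree within
    `tolerance`; raises if all three disagree. A simplified sketch of
    the fault-tolerant architecture described above.
    """
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tolerance:
            # Two channels agree: ignore the odd one out and
            # take the average of the agreeing pair.
            return (x + y) / 2.0
    raise RuntimeError("No two channels agree: total channel disagreement")
```

So if Channel Three reads 5.0 while Channels One and Two both read 1.0, the voter returns 1.0 and the dodgy channel is simply outvoted.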

Then we’ve got number four, which is influential. So, the software is displaying some information for a human to interpret and to accept or reject.

And then we’ve got five, which is no safety impact at all. Now, here’s the problem: it’s very easy to say, “The software just displays some information, it doesn’t do anything unless a human does something – so we don’t have to worry about the safety implications at all”. Wrong! The human operator may be forced by circumstances to rely on the software output; there may not be time to do anything else. Or the human may not be able to work out what’s going on without using the software output. Or, more typically, the humans have just got used to the software generating the correct information, or they interpret it incorrectly.

A classic example of that was when the American warship USS Vincennes shot down an airliner and killed 290 people, because, the way the system was set up, the supposedly not-safety-related radar system was displaying information associated not with the airliner but with an Iranian military aircraft. The crew got mixed up and shot down the airliner. So, that’s a risky one. Even though it’s down at number four, that doesn’t mean it’s without risk or without criticality.

Commentary #4

So, we have the software control category down the left-hand side, one to five, and along the top we have the severity category, from catastrophic down to negligible. We can use that to determine the software criticality index, which varies from one (most critical) down to five (least critical). It’s similar to the risk assessment code in the Table Three coloured matrix that I showed you earlier. So, the writers of the standard have made a determination for us, based on some assessment that they’ve done, saying, “Well, this is how we assess these different criticality levels”. Whether there is actually any real-world evidence supporting this assessment, I don’t know, and I’m not sure anybody else does either. However, that’s the standard, and that’s where we are.
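To show how the lookup works, here is a sketch of that matrix in Python. The values are my reading of the standard’s software safety criticality matrix – verify them against your own copy of Mil-Std-882E before relying on them.

```python
# Rows are Software Control Categories (1 = Autonomous .. 5 = No Safety
# Impact); columns are the worst-credible severity. Values are the
# Software Criticality Index (1 = most critical, 5 = least critical).
# These entries are my reading of the standard -- check your copy.
SWCI_TABLE = {
    1: {"Catastrophic": 1, "Critical": 1, "Marginal": 3, "Negligible": 4},
    2: {"Catastrophic": 1, "Critical": 2, "Marginal": 3, "Negligible": 4},
    3: {"Catastrophic": 2, "Critical": 3, "Marginal": 4, "Negligible": 4},
    4: {"Catastrophic": 3, "Critical": 4, "Marginal": 4, "Negligible": 4},
    5: {"Catastrophic": 5, "Critical": 5, "Marginal": 5, "Negligible": 5},
}

def software_criticality(scc: int, severity: str) -> int:
    """Look up the Software Criticality Index for a software control
    category and the worst-credible severity it could contribute to."""
    return SWCI_TABLE[scc][severity]
```

For example, autonomous software that could contribute to a catastrophic outcome lands at the most critical index, while anything in category five stays at the least critical regardless of severity.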

Commentary #5

And so, just to finish up on the commentary. Task 208 is focused on software engineering, and also on programmable electronics – complex hardware, but typically electronics with software or logic functionality embedded within it. Now, if all of that software, all those programmable electronic systems, are already developed, is there any point in doing Task 208? That’s the first step – it’s got to pass the “So what?” test.

Is it feasible to do 208 and expect to get benefits? If not, maybe you just do system and subsystem hazard analysis – Tasks 205 and 204, respectively – and look at the complex components and subsystems as black boxes, saying, “OK, what’s it meant to do? What are the interfaces?” Maybe that would be a better thing to do. Particularly bearing in mind that the software or the complex electronic system could be working perfectly well and we still get an accident, because there’s been a misunderstanding of the output. Maybe it’s more beneficial to look at those interfaces and think about, “Well, in what scenarios could the human misunderstand? How do we guard against that?”

It’s also worth saying that some, particularly American, software development standards can work well with Mil-Std-882 because they share a similar conceptual basis. For example, I’ve seen many, many times in the air world that the system safety standard is 882 and the software standard is DO-178 (AKA ED-12 – it’s the same standard, just different labels). Now, they work relatively well together because the concept underpinning 178 is very similar to 882. It’s American-centric.

It’s all about putting requirements on the software development – this is sort of a cookbook approach – and the standard assumes that if you use the right ingredients and you mix them up in the right way, then you’re going to get a good result. That’s a similar concept to 882, and the two work relatively well, fairly consistently, together. Also, because they’re both American, there’s a great focus on software testing. Certainly, the earlier versions of DO-178 are exclusively focused on software testing. Things like source code analysis and other, more modern techniques that have come in are not recognized at all in earlier versions of 178, because they just weren’t around.

That focus on testing suits 882, because 882 generates lots of requirements and constraints, which you need to test. What it’s not so good at is generating cases where you say, “Well, if this goes wrong…” or “Let’s test for those edge-of-the-envelope cases; let’s test that the software is working correctly when it’s outside of the operating envelope it should be in”. That kind of thinking isn’t so strong in 882, nor in 178. So, there are some limitations there. Good, experienced practitioners will overcome those by adding in the smarts that the standards lack. But just be aware: a standard is not smart. You’ve still got to know what you’re doing in order to get the most out of it.

So, maybe you’re buying software that’s pre-developed, or you’re not in the States – you’ve got a European or Asian supplier, Indian or Japanese or whatever. Maybe they’re not using American-style techniques and standards. How well is that going to work with 882? Are they compatible? They might be, but maybe they’re not. So, that requires some thought. If they’re not obviously compatible, then what do you need to do to make that translation and make it work? Or at least understand where the gaps are, and what you might do to compensate?

And I’ve not talked about data, but it is worth mentioning that we have data-rich systems these days – I heard just the other day, is it two quintillion bytes of data being generated every two days, or something ridiculous? And that was back in 2017. So, gigantic amounts of data are being generated these days and used by computing systems, particularly artificial intelligence systems. The rigour associated with that data – the things that we need to think about on data – is potentially just as important as the software. Because if the software is processing rubbish data, you’re probably going to get rubbish results, or at the very least unreliable results that you can’t trust. So, you need to be thinking about all of those attributes of your data: correct, complete, consistent, etc. I probably need to do a session on that, and maybe I will.

Copyright Statement

That’s the presentation. As you can see, everything in italics and quotes is out of the standard, which is copyright free. But this presentation is copyright of the Safety Artisan.

For More…

And you will find many more presentations and a lot more resources at the website www.safetyartisan.com. Also, you’ll find the paid videos on our Patreon page, which is www.patreon.com/SafetyArtisan or go to Patreon and search for the Safety Artisan.

End

Well, that’s the end of our presentation, and it just remains for me to say thanks very much for listening. Thanks for your time and I look forward to seeing you in the next session, Task 209. Looking forward to it. Goodbye.



System Hazard Analysis

In this 45-minute session, The Safety Artisan looks at System Hazard Analysis, or SHA, which is Task 205 in Mil-Std-882E. We explore Task 205’s aim, description, scope, and contracting requirements. We also provide value-adding commentary, which explains SHA – how to use it to complement Sub-System Hazard Analysis (SSHA, Task 204) in order to get the maximum benefits for your System Safety Program.

This is the seven-minute-long demo. The full video is 47 minutes long.

System Hazard Analysis: Topics

  • Task 205 Purpose [differences vs. 204];
    • Verify subsystem compliance;
    • ID hazards (subsystem interfaces and faults);
    • ID hazards (integrated system design); and
    • Recommend necessary actions.
  • Task Description (five slides);
  • Reporting;
  • Contracting; and
  • Commentary.
Transcript: System Hazard Analysis

Introduction

Hello, everyone, and welcome to the Safety Artisan, where you will find professional, pragmatic, and impartial safety training resources and videos. I’m Simon, your host, and I’m recording this on the 13th of April 2020. And given the circumstances when I record this, I hope this finds you all well.

System Hazard Analysis Task 205

Let’s get on to our topic for today, which is System Hazard Analysis. Now, system hazard analysis, as you may know, is Task 205 in the Mil-Std-882E system safety standard.

Topics for this Session

What we’re going to cover in this session is purpose, task description, reporting, contracting, and some commentary – although I’ll be making commentary all the way through. Going back to the top: with this and with Task 204, I’m using the yellow highlighting to indicate differences between 205 and 204, because they are superficially quite similar. And I’m using underlining to emphasize those things that I really want to bring to your attention. Within Task 205’s purpose, we’ve got four purposes for this one: verify subsystem compliance, and – the fourth one there – recommend necessary actions. And then, in the middle of the sandwich, we’ve got identification of hazards, both at the subsystem interfaces – and from faults in the subsystems propagating upwards to the overall system – and in the integrated system design. So, quite a different emphasis to 204, which was really thinking about subsystems in isolation. We’ve got five slides of task description, a couple on reporting, one on contracting – nothing new there – and several commentaries.

System Requirements Hazard Analysis (T205)

Let’s get straight on with it. The purpose, as we’ve already said, is three-fold: verify system compliance, identify hazards, and recommend actions. And then, as we can see in the yellow, identifying previously unidentified hazards is split into two: looking at subsystem interfaces and faults, and at the integration of the overall system design. You can see in the yellow bit what’s different from 204 – we are taking a much higher-level view, taking an inter-subsystem view and then an integrated view.

Task Description (T205) #1

On to the task description. The contractor has got to do it and document it, as usual, looking at hazards and mitigations, or controls, in the integrated system design, including software and the human interface. That’s very important; we’ll come onto it later. All the usual stuff: we’ve got to include COTS, GOTS, GFE, and NDI. So, even if stuff is not being developed – if we’re putting together a jigsaw system from existing pieces – we’ve still got to look at the overall thing. And as with 204, we go down to the underlined text at the bottom of the slide: areas to consider. Think about performance, degradation of performance, functional failures, timing and design errors, defects, and inadvertent functioning – that classic functional failure analysis that we’ve seen before. And again, while conducting this analysis, we’ve got to include human beings as an integral component of the system, receiving inputs and initiating outputs. Human factors were included in this standard from long ago.

Task Description (T205) #2

Slide two. We’ve got to include a review of subsystem interrelationships. The assumption is that we’ve previously done Task 204 down at a low level, and now we’re building up to Task 205. Again: verification of system compliance with requirements (A.), identification of new and emergent hazards, and recommendations for actions (B.). But Part C is really the new bit. We are looking at possible independent, dependent, and simultaneous events (C.), including system failures, failures of safety devices, common cause failures, and system interactions that could create a hazard or increase risk. This is really the new stuff in 205, and we are going to emphasize it in the commentary. Look very carefully at those underlined things, because they are key to understanding Task 205.

Task Description (T205) #3

Moving on to Slide 3 – all new stuff, all in yellow. Degradation of the system or the total system (D.), and design changes that affect subsystems (E.). Now, I’ve underlined this because – what’s the one constant in projects? It’s change. You start off thinking you’re going to do something, and maybe the concept changes, subtly or not so subtly, during the project. Maybe your assumptions change, the schedule changes, the resources available change. You thought you were going to get access to something, but it turns out that you’re not. All these things can change and cause problems, as I am sure we know. So, we need to deal with not just the program as we started out, but the program as it turns out to be – as it’s actually implemented. And that’s something I’ve often seen go awry, because people hold on to what they started out with, partly because they’re frightened of change and partly because of the work involved in really taking note of changes. It takes a really disciplined program or project manager to push back on random change, to control it well, and then to think through the implications. That’s where strength of leadership comes in, but it is difficult to do.

Moving on now. It says effects of human errors (F.); in the blue, I’ve changed that. ‘Human error’ implies that the human is at fault, that the human made a mistake. But very often, we design suboptimal systems and just expect the human operator to cope. Whether that’s fair or unfair or unreasonable, it results in accidents. So, what we need to think about, more generally, is erroneous human action. Something has gone wrong, but it’s not necessarily the human’s fault; maybe the system has induced the human to make an error. We need to think very carefully about that.

Moving on: determination (G.) of the potential contribution of all those components in G.1 – as we said before, all the non-developmental stuff. G.2: have the design requirements in the specifications been satisfied? This standard emphasizes specifications and meeting requirements; we’ve discussed that in other lessons. And G.3: whether the methods of system implementation have introduced any new hazards. Because, of course, in the attempt to control hazards, we may introduce technology, or plant, or substances that can themselves create problems. So, we need to be wary of that.

Task Description (T205) #4

Moving on to slide four. Now, in 205.2.2, the assumption is that the PM has specified the methods to be used by the contractor. That’s not necessarily true – the PM may not be an expert in this stuff, or they may, for contractual or whatever reasons, have decided they want the contractor to choose the techniques. But the assumption here is that the PM has control, and if the contractor decides they want to do something different, they’ve got to get the PM’s authority to do so. This is assuming, of course, that this has been specified in the contract.

And 205.2.3: whichever contractor is performing the system hazard analysis, the SHA, they are expected to have oversight of the software development that’s going to be part of their system. Again, that doesn’t happen unless it’s contracted. If you don’t ask for it, you’re not going to get it, because it costs money. And the ultimate client needs to insist on this in the contract and police it – because it’s all very well asking for stuff, but if you never check what you’re getting, you can’t be sure that it’s really happening. As the American Admiral Rickover once said, “You get the safety you inspect”. So, if you don’t inspect it, don’t expect to get anything in particular. And again, if anything requires mitigation, the expectation in the standard is that it will be reported to the PM – the client PM, that is – and that the PM will have authority. This is an assumption in the way that the standard works. If you’re not going to run your project like that, then you need to think through the implications of using this standard and manage accordingly.

Task Description (T205) #5

And the final slide on the task description. We’ve got another reminder that the contractor performing the SHA shall evaluate design changes. Again, if the client doesn’t contract for this, it won’t necessarily happen. Or indeed, if the client doesn’t communicate changes to the contractor, or the subcontractors don’t communicate with the prime contractor, then this won’t happen. So, we need to put communication channels in place and insist that these things happen. Configuration control, and so forth, is a good tool for making sure that they do.

Reporting (T205) #1

So, if we move on to reporting, we’ve got two slides on this. No surprises: the contractor shall prepare a report that contains the results from the analysis as described. First, part A: we’ve got to have a system description, including the physical and functional characteristics and the subsystem interfaces. Always important – if we don’t have that system description, we don’t have the context to understand the hazard analysis that has been done, or not done, for whatever reason. And the expectation is that there will be reference to more detailed information as and when it becomes available. Maybe the detailed design stuff isn’t going to emerge until later, but it has to be included. Again, this has got to be required.

Reporting (T205) #2

Moving on to parts B and C. Part B: as before, we need to provide a description of each analysis method used, the assumptions made, and the data used in that analysis. Again, if you don’t include this description, it’s very hard for anybody to independently verify that what has been done is correct, complete, and consistent. And without that assurance, the whole purpose of doing the analysis in the first place is undermined.

And then part C: we’ve got to provide the analysis results, and at the bottom of this subparagraph is the assumption that the analysis results could be captured in the hazard tracking system – say, the hazard log. But I would only expect the headline results to be captured in that hazard log; the detail is going to be in the Task 205 hazard analysis report, or whatever you’re calling it. We’ve talked about that before, so I’m not going to get into it here.

Contracting

And then the final bit of quotation from the standard is the contracting. Again, all the same things that you’ve seen before. We need to require the task to be completed. It’s no good just saying “apply Mil-Std-882E”, because if the contractor understands 882E, they will tailor it to suit themselves, not the client. Or, if they don’t understand 882E, they may not do it at all, or just do it badly. Or indeed, they may just produce a bunch of reports that have all the right headings from the data item description, which is usually supplied in the contract, but with no useful data under those headings. So, you have to make it clear to the contractor that they need to conduct this analysis and then report on the results. I know it sounds obvious, and I know it sounds silly having to say this, but I’ve seen it happen: you’ve got a contractor that does not understand what system safety is.

(Mind you, why have you contracted them in the first place to do this? You should have done your research and found out.)

But if it’s new to them, you’re going to have to explain it to them in words of one syllable, or get somebody else to do it for them. And in my day job, this is very often what consultancies get called in to do. You’ve got a contractor who may be expert in building tanks, or planes, or ships, or chemical plants, or whatever it might be, but they’re not expert in doing this kind of stuff. So, you bring in a specialist. And that’s part of my day job.

So, getting back to the subject. Yes, we’ve got to specify this stuff, and we’ve got to specify it early, which implies that the client has done quite a lot of work to figure it all out. And again, the client may – above the line, as we say – engage a consultant, a specialist, to help them with this. We’ve got to include all of the details that are necessary – and of course, how do you know what’s necessary unless you’ve worked it out? And you’ve got to supply the contractor with – it says the concept of operations, but really – as much relevant data and information as you can, without bogging them down. That context is important to getting good results and a successful program.

Illustration

I’ve got a little illustration here. The supposition in the standard, in Task 205, is that we’ve got a number of subsystems, and there may be some other building blocks in there as well – some infrastructure. We’re probably going to have some users, we’re going to have an operating environment, and maybe some external systems that our system of interest interfaces with or interacts with in some way. That interaction might be deliberate, or it might just be a matter of sharing the same operating environment – and they will interact, intentionally or otherwise.

Commentary – Go Early

With that picture in mind, let’s think about some important points. The first one is: get some 205 work done early. Now, the implication in the standard, from the numbering and when you read the text, is that subsystem hazard analysis comes first. You do those hexagonal building blocks first, and Task 205 comes after the subsystem hazard analysis – you’ve already got the SSHAs for each subsystem, and then you build the SHA on top. However, if you don’t do 205 early, you’re going to lose an opportunity to influence the design and to improve your system requirements. So, it’s worth doing an initial pass of 205 first, top-down, before you do the 204 hexagons, and then coming back up to redo 205. The first pass is done early to gain insight, to influence the design, to improve your requirements, and to improve, let’s say, the prime contractor’s appreciation and reporting of what they are doing. And, dare I say, a quick-and-dirty stab at 205 could be quite cheap, and the payback – the return on investment – should be large if you do it early enough. And, of course, act on the results.

And then the second pass is more about verifying compliance, verifying those required interfaces, and looking at emergent stuff, stuff that’s emerged – the devil’s in the detail, as the saying goes. We can look at what’s coming out of that detail, pull it all together, tidy it up, and look for emergent behaviour.

Commentary – Tools & Techniques

Looking at tools and techniques: most safety analysis techniques that we use look at single events or single failures only, in isolation, and usually we expect those events and failures to be independent. There are lots of analyses out there – basic fault tree analysis, event tree analysis (well, event trees are slightly different, in that we can think about subsequent failures) – but lots of basic techniques will really only deal with a single failure at a time. However, 205.2.1C requires us to go further. We’ve got to think about dependent and simultaneous events and common cause failures, and for a large and complex system each of those can be a significant undertaking. So, if we’re doing Task 205 well, we are going to push into these areas and not simply do a copy of Task 204 at a higher level. We’re now really talking about the second pass of 205: the previous, quick-and-dirty 205 is done, the Task 204 work on the subsystems is done, and now we’re pulling it all together.

Dependent & Simultaneous Events

Let’s think about dependent and simultaneous events. First, dependent failures: can an initial failure propagate? For example, a fire could lead to an explosion, or an explosion could lead to a fire – that’s a classic combination. Or something breaks or wears: it could be as simple as components wearing, and then we get debris in the lubrication system. Could the debris from component wear clog up the lubrication system, cause it to fail, and then cause a more serious seizure of the overall system? Stuff like that. Or there may be more subtle functional effects – electrical effects, say. If we get a failure in an electrical system, or even non-failure events that happen together, could we get what’s called a sneak circuit? Could we get a reverse flow of current that we’re not expecting, and could that cause unexpected effects? There’s a special technique for looking at that called sneak circuit analysis. That’s sneak, SNEAK; go look it up if you’re interested. Or could there be multiple effects from one failure? Now, I’ve already mentioned fire, and it’s worth repeating: fire is the absolute classic. First, the effects of fire. You’ve got the fire triangle: to get fire, we need fuel (a flammable substance), we need oxygen, and we need heat. Without all three, we don’t get a fire. But once we do get a fire, all bets are off, and we can get multiple effects. You might remember, from being tortured doing thermodynamics in class, the old combined gas law: P1V1/T1 equals P2V2/T2.

What that’s saying is that your initial pressure times volume, divided by temperature – P1V1/T1 – is going to be the same as your subsequent P2V2/T2. So, if you dramatically increase the temperature, say, because that’s what a fire does, then your volume and your pressure are going to change. In an enclosed space, we get a great big increase in pressure; in an unenclosed space, we’re going to get an increase in the volume of the gas or fluid. So, if we start to heat the gas or fluid, it’s probably going to expand, and that could cause a spill and further knock-on effects. And fire, as well as making pressure and volume changes to the fluids, can weaken structures, make smoke, and produce toxic gases. So, it can produce all kinds of secondary hazardous effects that are dangerous in themselves and can mess up your carefully orchestrated engineering and procedural controls. For example, if you’ve got a fire that causes a pressure burst, you can destroy structures and your fire containment can fail. You can’t necessarily send people in to fix the problem, because the area is now full of smoke and toxic gas. So, fire is a great example of this kind of thing, where you think, “Well, if this happens, then it really messes up a lot of controls and causes a lot of secondary effects”. A good example, but not the only one.
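The pressure effect is easy to put numbers on. Here is a small sketch, with assumed illustrative figures (not from the standard), of the combined gas law solved for the final pressure of a sealed, constant-volume enclosure that a fire heats up.

```python
def pressure_after_heating(p1, t1, t2, v1=1.0, v2=None):
    """Combined gas law: P1*V1/T1 == P2*V2/T2 (temperatures in kelvin).

    Solves for P2. With v2 defaulting to v1, this models a sealed
    (constant-volume) enclosure being heated -- the fire case above.
    """
    if v2 is None:
        v2 = v1
    return p1 * v1 * t2 / (t1 * v2)

# Assumed illustrative numbers: a sealed vessel at 100 kPa and 300 K,
# heated by fire to 900 K, triples its pressure.
p2 = pressure_after_heating(p1=100.0, t1=300.0, t2=900.0)  # -> 300.0 kPa
```

Tripling the absolute temperature triples the pressure in a sealed vessel, which is exactly why fire-driven pressure bursts can defeat containment.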

And then simultaneous events – a hugely different issue. What we’re talking about here is undetected, or latent, failures. Something has failed, but it’s not apparent that it’s failed; we’re not aware, and that could be for all sorts of reasons. It could be a fatigue failure – we’ve got something that’s cracked – or it could be thermal fatigue. There are lots of things that can degrade physical systems and make them brittle. For example, an odd one: radiation causes most metals to expand, and neutron bombardment makes them brittle. So, it can weaken things – structures and so forth. Or we might have a safety system that has failed, but because we’ve not called upon it in anger, we don’t notice. And then we have a failure – maybe the primary system fails. We expect the secondary system to kick in, but it doesn’t, because there’s been some problem, or some knock-on effect has prevented the secondary system from kicking in. And I suspect we’ve all seen that happen.

My own experience of that was on a site I was working on. We had a big electricity failure – a contractor had sawn through the mains electricity cable, or dug through it. And then, for some unknown reason, the emergency generators failed to kick in. So, that meant that a major site where thousands of people worked had to be evacuated, because there was no electricity to run the computers. Even the old analogue phones failed after a while. Today, those phones would be digital, probably voice over IP, and without electricity they’d fail instantly. And eventually, without power for the plumbing, the toilets back up, so you end up having to evacuate the entire site because it’s unhygienic. Some effects can be very widespread – just because you had a latent failure, and your backup system didn’t kick in when you expected it to.

So how can we look at that? Well, this is classic reliability modelling territory. We can look at mean time between failures (MTBF) and mean time to repair (MTTR), and therefore we can work out what the exposure time might be. We can work out, “What’s the likelihood of a latent failure occurring?” If we’ve got an interval, presumably we’re going to test the system periodically – we’ve got to do a proof test. How often do we have to do the proof test to get a certain level of reliability or availability when we need the system to work? And we can look at synchronous and asynchronous events. To do that, we can use several techniques. The classic ones are reliability block diagrams and fault tree analysis. Or, if we’ve got repairable systems, we can use Markov chain modelling, which is very powerful. So, we can bring in the time-dependent effects of systems failing at certain times and then being required, or systems failing and being repaired, and look at overall availability – so that, if we look at potential failures in all the redundant constituent parts, we can get an estimate of how often the overall system will be available. There are lots of techniques for doing that, some of them quite advanced. And again, very often this is what we, as safety consultants, find ourselves doing.
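As a rough illustration of the proof-test and availability points, here’s a small Python sketch (my own, with made-up example numbers). It uses two standard reliability approximations: the low-demand average probability of failure on demand, PFDavg ≈ λT/2, for a periodically proof-tested standby channel, and steady-state availability, MTBF/(MTBF + MTTR), for a repairable system.

```python
def pfd_avg(failure_rate_per_hr: float, proof_test_interval_hr: float) -> float:
    """Average probability of failure on demand for a single standby channel
    that is proof-tested every `proof_test_interval_hr` hours.
    Low-demand approximation: PFDavg ~ lambda * T / 2."""
    return failure_rate_per_hr * proof_test_interval_hr / 2

def availability(mtbf_hr: float, mttr_hr: float) -> float:
    """Steady-state availability of a repairable system."""
    return mtbf_hr / (mtbf_hr + mttr_hr)

# A standby generator with lambda = 1e-5 failures/hr, proof-tested annually:
print(pfd_avg(1e-5, 8760))                 # ~0.044: fails ~4.4% of real demands
# A repairable system with MTBF 10,000 hr and MTTR 24 hr:
print(round(availability(10_000, 24), 4))  # 0.9976
```

Halving the proof-test interval roughly halves the PFDavg, which is exactly the trade-off between test burden and availability described above.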

Common Cause Failures

Common cause failure – this is another classic. We might think about something very obvious and physical. Maybe we’ve got three sets of input channels guarded by filters to stop debris getting into the system, but what if debris blocks all the filters so we get no flow? So, obvious – I say obvious, but often missed – sources of sometimes quite major accidents. Or, something more subtle: we’ve got three redundant channels, or some number of redundant channels, in an electronic system, and we need two out of three to work, or whatever it might be. But we’ve got the same software running on each channel. So, if the software fails systematically, as software does, then potentially all three channels will fail at the same time.
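To show just how much a common cause undermines paper reliability, here’s a small Python sketch (illustrative only, using a simple beta-factor model and made-up numbers) of a two-out-of-three voted system.

```python
def p_fail_2oo3(q_channel: float, beta: float = 0.0) -> float:
    """Probability that a 2-out-of-3 voted system fails, with a simple
    beta-factor common-cause model: a fraction `beta` of each channel's
    failure probability is assumed to strike all three channels at once."""
    q_common = beta * q_channel        # takes out all three channels together
    q_ind = (1 - beta) * q_channel     # independent per-channel failures
    # System fails on the common-cause event, or on >= 2 independent failures.
    two_or_more = 3 * q_ind**2 * (1 - q_ind) + q_ind**3
    return q_common + (1 - q_common) * two_or_more

print(p_fail_2oo3(1e-3, beta=0.0))   # ~3e-6: superbly reliable on paper
print(p_fail_2oo3(1e-3, beta=0.1))   # ~1e-4: the common cause dominates
```

With a 10% common-cause fraction, the voted system is roughly thirty times worse than the independence assumption suggests – the shared software (or shared filters) sets the floor, not the voting logic.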

So, there’s a good example of non-independent failures taking down a system that on paper has a very high reliability, but actually doesn’t once you start considering common cause failure or common mode analysis. Really, what we would like is for all redundancy to be diverse, if possible. For example, if we wanted to know how much fuel we had left in the aeroplane – which is quite important if you want the engines to keep working – then we can employ diverse methods. We can use sensors to measure how much fuel is in the tanks directly, and then we can cross-check that against a calculated figure, where we’ve entered, let’s say, how much fuel was in the tanks to start with and then measured the flow of fuel throughout the flight. So, we can calculate or estimate the amount of fuel and cross-check that against the actual measurements in the tanks. There’s a good diverse method. Now, it’s not always possible to engineer a diverse method, particularly in complex systems. Sometimes there’s only really one way of doing something, so diversity kind of goes out of the window in such an engineered system.

But maybe we can bring a human in.

So, another classic in the air world: we give pilots instruments to tell them what’s going on with the aeroplane, but we also suggest that they look out of the window at reality and cross-check. Which is great – unless you’re flying in cloud or in darkness, where there are no visual references, so you can’t necessarily cross-check. But even with system failures, can the pilot look out of the window and see which propeller has stopped turning? Or which engine the smoke and flames are coming out of? That might sound basic and silly, but there have been lots of very major accidents where that hasn’t been done, and the pilots have shut down the wrong engine or managed the wrong emergency. And not just pilots, but operators of nuclear power plants and all kinds of things. So, visual inspection – going and looking at the stuff, if you have time – or taking some diverse way of checking what’s going on, can be very helpful if you’re getting confusing results from instrument or sensor readings.

And those are examples of the terrific power of human diversity. Humans are good at taking different sensory inputs and fusing them together to form a picture. Most of the time they fuse the data well and get the correct picture, but sometimes they get confused by a system, or they get contradictory inputs and form the wrong mental model of what’s going on, and then you can have a really bad accident. So, thinking about how we alert humans, how we use alarms to get humans’ attention, and how we employ human factors to make sure that we give humans the right input – the right mental picture, or mental model – is very important. So, back to human factors again; especially important at this level for Task 205.

And of course, there are many specialist common cause failure analysis techniques, so we can use fault trees. Normally in a fault tree, when you’ve got an AND gate, we assume that those two sub-events are independent. But we can use what are called ‘beta factors’ to say, “Let’s say event A and event B are not independent; we think that 50 percent, or 10 percent, of the time they will happen together”, and put that beta factor in to change the calculation. So, fault trees can cope with non-independent events, providing you program the logic correctly and understand what’s going on. And if there’s uncertainty about the beta factors, you must do some sensitivity modelling on the tree with different beta factors, or run multiple models of the tree. But again, we’re now talking quantitative, or at least semi-quantitative, fault tree techniques – quite advanced techniques, where you would need a specialist who knows what they’re doing in this area to get realistic results from that sensitivity analysis. The other thing is, if the sensitivity analysis gives you an answer that you don’t want, you need to do something about it, and not just file the analysis report away in a cupboard and pretend it never happened. (Not that that’s ever happened in real life, boys and girls, never, ever, ever. You see my nose getting longer? Sorry, let’s move on before I get sued.)
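For illustration, here’s a minimal Python sketch (my own, not from any particular fault tree tool) of an AND gate with a beta factor, together with the kind of sensitivity sweep just described. The function and numbers are invented for the example.

```python
def and_gate_with_beta(p_a: float, p_b: float, beta: float) -> float:
    """AND-gate output probability where a fraction `beta` of each input's
    probability is treated as a shared common-cause contribution.
    beta = 0 recovers the usual independence assumption, p_a * p_b."""
    p_common = beta * min(p_a, p_b)   # the common-cause part fails both inputs
    return p_common + (p_a - p_common) * (p_b - p_common)

# Sensitivity sweep: how does the top-event probability move with beta?
# At beta = 0.1 the top event is ~100x more likely than independence suggests.
for beta in (0.0, 0.01, 0.1, 0.5):
    print(f"beta={beta:<5} P(top) = {and_gate_with_beta(1e-3, 1e-3, beta):.2e}")
```

This is the point of the sensitivity exercise: if the answer swings by two orders of magnitude across plausible beta values, the independence assumption is doing all the work, and that result should not be filed away and forgotten.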

So, other classic techniques. Zonal hazard analysis looks at lots of different components in a compartment. If component A blows up, does it take out everything else in that compartment? Or, if the compartment floods, what functionality do we lose in there? It’s particularly good for things like ships and planes, but also buildings with complex machinery – big plant where you’ve got different stuff in different locations. There are also things called particular risk analyses, which tend to deal with very unusual events. What if a fan blade breaks in a jet engine? Can the jet engine contain the fan blade failure? And if not, you’ve got a very high-energy piece of metal flying off somewhere – where does that go? Does it embed itself in the fuselage of the aeroplane? Does it puncture the pressure hull of the aeroplane? Or, as has sadly happened occasionally, does it penetrate and injure passengers? So, things like that – usually quite unusual things that are all very domain- or industry-specific. And then there are common mode analysis techniques. A good example of a standard that incorporates all of these is ARP 4761, a civil aircraft standard that covers them quite well; there are many others.

Summary

In summary, I’ve emphasized the differences between Task 205 and Task 204. We might do a first pass of 205 and 204 where we’re essentially doing the same thing, just at different levels of granularity. So, we might initially do the whole system as 205 – one big hexagon – and then break down the jigsaw and do some 204 at a more detailed level. But where 205 is really going to score is in its differences from 204. It’s valuable to repeat that analysis at a higher level, but really we need to diversify if we want success. So, we need to think about the different purpose and timing of these analyses. We need to think about what we’re going to get out of going top-down versus bottom-up – different sides of the ‘V’ model, let’s say.

We need to think about the differences between looking at internals versus external interfaces and interactions, and we need to think of appropriate techniques and tools for all those things – and, of course, whether we need to do that at all! We will have an idea about whether we need to from all the previous analysis. If we’ve done our PHI and PHA, we’ve looked at the history and some simple functional techniques, we’ve involved end-users, and we’ve learnt from experience. If we’ve done our early tasks, we’re going to get lots of clues about how much risk is present, both in terms of the magnitude of the risk and the complexity of the things that we’re dealing with. So, clearly, if we’ve got a very complex thing with lots of risks, where we could kill lots of people, we’re going to do a whole lot more analysis than for a simple, low-risk system. And we’re going to be guided by the complexity, the risks, and where the hot spots are, and go, “Clearly, I’ve got a particular interface or particular subsystem which is a hotspot for risk. We’re going to concentrate our effort there”. If you haven’t done the early analysis, you don’t get those clues. So, you do the homework early, which is quite cheap, and that helps you direct effort for the best return on investment.

The second major bullet point, which I talk about again and again, is that the client and end-user and/or the prime contractor need to do analysis early in order to get the benefits, to help them set requirements for lower down the hierarchy, and to pass relevant information to the sub-contractors. Because if you leave the sub-contractors in isolation, they’ll do a hazard analysis in isolation, which is usually not as helpful as it could be. You get more out of it if you give them more context. So really, the ultimate client, the end-user, and probably the prime as well, need to do this task, even if they’re subcontracting it to somebody else. Whereas, maybe the sub-system hazard analysis, Task 204, could be delegated down to the sub-system contractors and suppliers – if they know what they’re doing and they’ve got the data to do it, of course. And if they haven’t, somebody further up the supply chain may have to do it.

And lastly, 204 and 205 are complementary, but not the same. If you understand that and exploit those similarities and differences, you will get a much more powerful overall result. You’ll get synergy – a win-win situation where the two different analyses complement and reinforce each other. And you’re going to get a lot more success, probably for not much more money, effort, or time. If you’ve done that thinking exercise and really sought to exploit the two together, then you’re going to get a greater holistic result.

Copyright

So, that’s the end of our session for today. Just a reminder that I’ve quoted from the Mil. Standard 882, which is copyright free, but the contents of this presentation are copyright Safety Artisan, 2020.

For More …

And for more lessons and more resources, please do visit www.safetyartisan.com and you can see the videos at www.patreon.com/safetyartisan.

End

That’s the end of the lesson on System Hazard Analysis, Task 205. It just remains for me to say thanks very much for watching, and look out for the next in the series of Mil. Standard 882 tasks. We will be moving on to Task 206, which is Operating and Support Hazard Analysis (O&SHA) – a quite different analysis to what we’ve just been talking about. Well, thanks very much for watching, and it’s goodbye from me.

The End

You can find a free pdf of the System Safety Engineering Standard, Mil-Std-882E, here.