Transcript: Sub-System Hazard Analysis (T204)

Here is the full transcript: Sub-System Hazard Analysis.

In the video lesson, The Safety Artisan looks at Sub-System Hazard Analysis, or SSHA, which is Task 204 in Mil-Std-882E. We explore Task 204’s aim, description, scope and contracting requirements. We also provide value-adding commentary and explain the issues with SSHA – how to do it well and avoid the pitfalls.

Introduction

Hello, everyone, and welcome to the Safety Artisan, where you will find professional, pragmatic, and impartial instruction on all things system safety. I'm Simon, your host for today, as always, and it's the fourth of April 2022. With everything that's going on in the world, I hope that this video finds you safe and well.

Sub-System Hazard Analysis

Let's move straight on to what we're going to be doing. We're going to be talking today about subsystem hazard analysis, which is Task 204 under the military standard 882E. Previously we've done 201, which was preliminary hazard identification; 202, which is preliminary hazard analysis; and 203, which is safety requirements hazard analysis. And with Task 204 and Task 205, which is system hazard analysis, we're now moving into getting stuck into the particular systems that we're thinking about, whether they be physical or intangible. We're thinking about the system under consideration and really getting into that analysis.

Topics for this Session

So, the topics that we're going to cover today: I've got a little preamble to set things in perspective. We then get into the three purposes of Task 204: first, to verify compliance; secondly, to identify new hazards; and thirdly, to recommend necessary actions – or, in fact, to recommend control measures for hazards and risks. We've got six slides of task description, a couple of slides on reporting, one on contracting, and then a few slides of commentary where I put in my tuppence worth and hopefully add some value to the basic bones of the standard. It's worth saying that you'll notice that 'subsystem' is highlighted in yellow, and the reason for that is that the subsystem and system hazard analysis tasks are very, very similar. They're identical except for certain passages, and I've highlighted those in yellow. Normally I use a yellow highlighter to emphasize something I want to talk about; this time around, I'm using underlining for that, and the yellow is showing you what is different for subsystem analysis as opposed to system analysis. When you've watched both sessions, on 204 and 205, I think you'll see the significance of why I've done it that way.

Preamble – Sub-system & System HA

Before we get started, we need to explain the system model that the 882 is assuming. If we look on the left-hand side of the hexagons, we've got our system in the centre, which is what we're considering. Maybe that interfaces with other systems. They work within an operating environment; hence we have the icon of the world. And the system, and maybe the other systems, are there for a purpose: they're performing some task, they're doing some function, and that's indicated by the tools. We're using the system to do something, whatever it might be.

Then, as we move to the right-hand side, the system is itself broken down into subsystems. We've got a couple here: sub-system A and B, with A further broken down into A1 and A2, for example. There's some sort of hierarchy of subsystems that are coming together and being integrated to form the overall system. That is the overall picture I'd like you to bear in mind while we're talking about this. The assumption in the 882 is that we're going to be looking at this subsystem hierarchy from the bottom upwards, largely. We'll come on to that.

Sub-System Hazard Analysis – Purpose (T204)

The purpose of the task, as I've said before, is threefold. We must verify subsystem compliance with requirements – requirements to deal with risks and hazards. We must identify previously unidentified hazards, which may emerge now that we're working at a lower level. And we must recommend the actions necessary – that is, further requirements to eliminate hazards or mitigate the associated risks. We'll keep those three things in mind, and they will keep coming up.

Task Description (T204) #1

The first of six slides on the task description. Basically, we are being told to perform and document the SSHA, the sub-system hazard analysis. And it's got to include everything, whether it be new developments, COTS, GOTS, GFE, NDI, software and humans, as we'll see later. Everything must be included. We're being guided to consider the performance of the subsystem: what it is doing when it is doing it properly. We've got to consider performance degradation, functional failures, timing errors, design errors or defects, and inadvertent functioning – we'll come back to that later. And while we're doing the analysis, we must consider the human as a component within the subsystem, dealing with inputs and making outputs – if, of course, there is an associated human. We've got to include everything, and we've got to think about what could go wrong with the system.

Task Description (T204) #2

The minimum that the analysis has got to cover is as follows. We've got to verify subsystem compliance with requirements – that is to say, requirements to eliminate hazards or reduce risks. The first thing to note is that you can't verify compliance with requirements if there are no requirements. If you haven't set any requirements on the subsystem provider, or whoever is doing the analysis, then there's nothing to comply with and you've got no leverage if the subsystem turns out to be dangerous. I often see this get missed. People don't do their top-down systems engineering properly; they don't think through the requirements that they need; and, especially, they don't do the preliminary hazard identification and analysis that they need to do. They don't do Task 203, the SRHA, to think about what requirements they need to place further down the food chain, down the supply chain. And if you haven't done that work, then you can't be surprised if you get something back that's not very good, or you can't verify that it's safe. Unfortunately, I see that happen often, even on exceptionally large projects. If you don't ask, you don't get, basically.

We’ve got two sub-paragraphs here that are unique to this task. First, we’ve got to validate flow down of design requirements. “Are these design requirements valid?”, “Are they the right requirements?” From the top-level spec down to more detailed design specifications for the subsystem. Again, if you haven’t specified anything, then you’ve got no leverage. Which is not to say that you have to dive into massive detail and tell the designer how to do their job, but you’ve got to set out what you want from them in terms of the product and what kind of process evidence you want associated with that product.

And then the second sub-paragraph: we've got to ensure design criteria in the subsystem specs have been satisfied, verify that they're satisfied, and ensure that V and V of subsystem mitigation measures, or risk controls, has been included in test plans and procedures. As always, Mil-Std-882 is an American standard, and they tend to go big on testing. Where it says test plans and procedures, that might be anything – you might be doing V and V by analysis, by demonstration, by testing, or by other means. It's not necessarily just testing, but that's often the assumption.

Task Description (T204) #3

We must also identify previously unidentified hazards, because we are now down at a low level of detail in a subsystem, and stuff will probably emerge at that level that wasn't visible before. First, number one, we've got to ensure the implementation of subsystem design requirements and controls, and ensure that those requirements and controls have not introduced any new hazards. Very often accidents occur not because the system has gone wrong – the system is working as advertised – but because the hazards of normal operation just weren't appreciated and guarded against, or we just didn't warn the operators that something might happen that they needed to look out for. A common shortfall, I'm afraid.

And number two, we've got to determine modes of failure, down to component failures and human errors, single points of failure, common-mode failures, the effects when failures occur in components, and the effects arising from functional relationships: what happens if something goes wrong over on this side of the system or subsystem while something else is happening over here? What are those combinations? What could result? And again, we've got to consider hardware and software, including all the non-developmental stuff, and its faults and occurrences. Again, I very often see buyers/purchasers not thinking about the off-the-shelf stuff in advance, or not including it. And then sometimes you also see contractors saying, "This is off the shelf, so we're not analysing it." Well, the standard requires that they do analyse it to the extent practicable. They've got to look at what might go wrong with all of this non-developmental stuff, integrate the possible effects, and consider them. That's another common gotcha, I'm afraid. We do need to think about everything, whether it's developmental or not.
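Purely as an illustration – the standard doesn't mandate any particular worksheet format – here is a minimal Python sketch of the kind of record an FMEA-style analysis at this level might capture, with flags for single points of failure and common-mode groupings. All the field names and the example entry are invented for the sketch.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FailureModeRecord:
    """One row of an illustrative sub-system failure modes worksheet."""
    item: str                      # component or function analysed
    failure_mode: str              # e.g. fails to operate, operates inadvertently
    cause: str                     # component failure, human error, etc.
    local_effect: str              # effect within the sub-system
    system_effect: str             # effect propagated up to the system level
    single_point_of_failure: bool  # no redundancy protects against this mode
    common_mode_group: Optional[str] = None  # shared cause linking several items
    recommended_controls: List[str] = field(default_factory=list)

# Invented example entry, for illustration only.
record = FailureModeRecord(
    item="Sub-system A1 pressure sensor",
    failure_mode="Reads low under vibration",
    cause="Connector fretting (component failure)",
    local_effect="Controller under-reports pressure",
    system_effect="System-level overpressure hazard not annunciated",
    single_point_of_failure=True,
    common_mode_group="Vibration-sensitive connectors",
    recommended_controls=["Add independent pressure switch", "Revise connector spec"],
)
```

However the worksheet is laid out, the point is the same: every mode, its effects at both levels, and whether anything else protects against it.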

Task Description (T204) #4

And then part C: recommending actions necessary to eliminate hazards if we can. Very often we can't, of course, and we have to mitigate – we must reduce or minimize the associated risk of those hazards, in terms of the harm that might come to people. We've got to ensure that system-level hazards – it says hazards 'attributed' to the subsystem – are dealt with. Maybe we believed, when we did the earlier analysis, that the subsystem could contribute to a higher-level hazard, or maybe we've allocated some failure budget to this particular subsystem, which it has got to keep to if we're going to meet the higher-level targets. You can imagine lots of these subsystems all feeding up a certain failure rate and different failure modes; overall, when you pull it all together, we may have to meet some target, or reduce the number of failures and their propagation upwards, in order to manage hazards and risks. We've got to make sure that adequate mitigation controls for these potential hazards are implemented in the design.
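To make that failure-budget idea concrete with some hedged arithmetic: if we assume independent sub-system contributions with roughly constant failure rates, their contributions to a given system-level hazard approximately add, and the total can be checked against the allocated target. A minimal sketch, with all the numbers invented:

```python
# Illustrative failure-budget roll-up (rates per operating hour, all invented).
subsystem_rates = {
    "Sub-system A1": 2.0e-6,
    "Sub-system A2": 1.5e-6,
    "Sub-system B":  4.0e-6,
}
system_target = 1.0e-5  # allocated system-level target for this hazard

# Approximation: independent contributions to the same hazard, so rates add.
combined = sum(subsystem_rates.values())
print(f"Combined contribution: {combined:.1e} /hr (target {system_target:.1e} /hr)")
print("Within budget" if combined <= system_target else "Budget exceeded")
```

In a real program the combination logic, the independence assumption, and the targets themselves all come from the system-level analysis, not from a back-of-envelope sum like this; the sketch just shows why each subsystem has to keep to its allocation.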

If we think back to the hierarchy of controls, we prefer to fix things in the design – eliminate the hazard if possible, or make changes to the design to eliminate or reduce the hazard – rather than just rely on human beings to catch the problem and deal with it further downstream. It's far more effective and cheaper, in the long run, to fix things in design: they are more effective controls. Certainly, in this standard, in Australian law, and in the UK and elsewhere, you will find either regulations, or law, or codes of practice, or recognized and accepted good practice that says, "You should do this". It's a very, very common requirement, and we should pretty much assume that we have to do this.

Task Description (T204) #5

Interesting clause here in 2.2: it says that if no specific hazard analysis techniques are directed, or the contractor wants to take a different route to what is directed, then they've got to obtain approval from the Program Manager. The PM may not have specified analysis techniques, and they may not wish to – they may just wish to say, do whatever analysis is required in order to identify hazards and mitigate them. But in many industries there are certain ways of doing things and, as I've said before in previous lessons, if you don't specify what you want, then contractors will very often cut the safety program to the bone in order to be the cheapest bid. The customer will get what they prioritize. If the customer prioritizes a cheap bid and doesn't specify what they want, then they will get the bare minimum that the contractor thinks they can get away with. If you don't ask, you don't get – becoming a theme, isn't it?

Task Description (T204) #6

Let's move on to 2.3. Returning to software: we've got to include that. The software might be developed separately, but nevertheless the contractor performing the SSHA shall monitor the software development and shall obtain data from each phase of the software development process in order to evaluate the contribution of the software to the subsystem hazard analysis. There's no excuse for just ignoring the software and treating it as a black box. Of course, very often these days the software is already developed – it's a GFE or NDI item – but there should still be evidence available, or you do a black-box analysis of the subsystem that the software is sitting in. Again, if the software developer identifies any hazards, they've got to be reported to the program manager in order to request appropriate direction.

This assumes a level of interaction between the software developers right up the chain to the program manager. Again, this won't happen unless the program manager directs it and pays for it. If the PM doesn't want to pay for it, then they are either going to have to take a risk on not knowing about the functionality of the software that's hidden within the subsystem, or they're going to deal with it some other way, which is often not effective. The PM needs to do a lot of work upfront to think about what kind of problems there might be associated with a typical subsystem of whatever kind it is we're dealing with, and to ask, "How would I deal with the associated risks?" and "What's the best way to deal with them in the circumstances?" If I'm buying stuff off the shelf and I'm not going to get access to hazard analysis or other kinds of evidence, how am I going to deal with that? Big questions.

And then 2.4: the contractor shall update the SSHA following changes, including software design changes. Again, we can't just ignore those things. That's slide six out of six. Let's move on to reporting.

Reporting (T204) #1

On the first slide, the contractor has got to prepare a report that contains the results from the task, including, within the system description, the physical and functional characteristics of the system, a list of the subsystems, and a detailed description of the subsystem being analysed, including its boundaries. From other videos, you'll know how much and how often I emphasize knowing where the boundaries are, because you can't really do effective safety analysis and safety management on an unbounded system. It just doesn't work. There's a requirement here for quite a lot of information, with reference to more detailed descriptions as they become available – the standard says they shall be supplied. That's a lot of information – probably text and pictures of all sorts of stuff – and that's going to need to go into a report. Typically, we would expect to see a hazard analysis report, or HAR, with this kind of information in it. Again, if the PM/customer doesn't specify that HAR, then they're not going to get it, and they're not going to get the textual information that they need to manage the overall system.

Reporting (T204) #2

So, if we move on to parts B and C of the reporting requirement: we've got to describe the hazard analysis methods and techniques, provide a description of each method and technique used, a description of the assumptions made, and the qualitative or quantitative data used. This is another area that often gets missed. If you don't know what techniques have been used, and you don't know the assumptions – which the subsystem analyst will almost certainly have had to make, because they probably don't have visibility of the rest of the system – then it becomes very difficult to verify the hazard analysis work and to have confidence in it.

And then the hazard analysis results. Content and format vary – that's something else the PM is going to have to think about and specify upfront. The results should be captured in the hazard tracking system. Now, usually, this hazard tracking system is a hazard log. It might be a database, a spreadsheet, or even a Word document, or something like that. And usually, in the hazard tracking system, we have the leading particulars. We don't always have – in fact, we shouldn't have – every little piece of information in the hazard tracking system, because it will quickly become unwieldy. Really, we want the hazard log to have the leading particulars of all the hazards, causes, consequences and controls. And then the hazard log should refer out to that hazard analysis report, or other reports and data, whatever they're called – other records.
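As a hedged sketch of what "leading particulars plus references out" might look like in a hazard tracking system – the structure and field names are my assumptions for illustration, not a format the standard prescribes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class HazardLogEntry:
    """Leading particulars only; the detail lives in the referenced reports."""
    hazard_id: str
    description: str
    causes: List[str]
    consequences: List[str]
    controls: List[str]
    severity: str          # per the program's agreed risk matrix
    likelihood: str
    status: str            # e.g. open, mitigated, closed
    references: List[str] = field(default_factory=list)  # e.g. HAR sections

# Invented example entry.
entry = HazardLogEntry(
    hazard_id="HAZ-042",
    description="Inadvertent actuator movement during maintenance",
    causes=["Control software fault", "Maintainer error"],
    consequences=["Crush injury to maintainer"],
    controls=["Maintenance interlock", "Lock-out/tag-out procedure"],
    severity="Critical",
    likelihood="Remote",
    status="Open",
    references=["SSHA Report section 5.3", "FMEA worksheet rows 12-15"],
)
```

The references field is doing the work here: the log stays lean, and anyone who needs the supporting analysis follows the pointer out to the report.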

If we go back up, this re-emphasizes the kind of detail that's here in 2.5 A. That really shouldn't be going in the hazard log; it should be going in a separate report which the hazard log / hazard tracking system refers to. Otherwise, it all gets unwieldy.

Contracting

As I've said repeatedly, the PM needs to think about this stuff upfront and ask for it.

Contracting: the standard assumes that the information in A to H below is specified way up front in the request for proposal. That's not always possible to do in full detail, but nevertheless you've got to think about these things really early and include them in the contractual documentation. And again, if you're running a competition, by the time you get to the final RFP you need to make sure that you're asking for what you really need – maybe run a preliminary expression of interest or a pre-competition exercise in order to tease out that detail. We've got to impose Task 204 as a requirement (A.). We may have to specify which people we want to involve – which functional specialists, which discipline specialists we want to get involved to address this work (B.). Identification of the subsystems to be analysed (C.): well, if you don't know what the design is upfront, we can't always do that, but you could say 'all'.

You may specify desired analysis methodologies and techniques (D.). Again, that's largely domain-dependent: we tend to do safety in certain ways in different worlds – in the air world it's done in a particular way; in the maritime world, a different way; with road or off-road vehicles, it's done in a particular way, and so on – chemical plant, whatever it might be. There may be known hazards, hazardous areas, or other specific items to be examined or excluded (E.), excluded perhaps because they're covered adequately elsewhere. The PM or the client has got to provide technical data on all those non-developmental items (F.), particularly if they're specifying that the contractor will use them. If the client says, "You will use these tires, therefore here is the data for these tires", or whatever it might be – maybe we want a system that's going to use standardized spares or standardized fuel, or is maintainable by technicians and mechanics with these standard skill sets. There may be all sorts of reasons for asking or forcing contractors to do certain things, in which case the purchaser is responsible for providing that data.

And again, many purchasers forget to do that entirely, or do it very badly, and that can cripple a safety program. What's the concept of operations (G.)? What are we going to do with this stuff? What's the context? What's the big picture? That's important. And any other specific requirements (H.): what risk matrix, what risk definitions are we using on this program? Again, that's important; otherwise different contractors do their own thing, or they do nothing at all, and then the client must pick up the pieces afterwards, which is always time-consuming and expensive and painful. And it tends to happen at the back end of a program, when you're under time pressure anyway. It's never a happy place to be. So do make sure, clients and purchasers, that you've done your homework and specified this stuff upfront. Even if it turns out to be not the best thing you could have specified, it's better to have an 80 percent solution that's pretty standard and locked down.

Commentary #1

That was the wording that's in the task, with some commentary from myself. Now some additional commentary. It says right up front that areas to consider include performance, performance degradation, functional failures, timing errors, design errors or defects, and inadvertent functioning. What we have here, basically, is a causal analysis, and there are some simple techniques that you can use to identify this kind of stuff – something like a functional failure analysis (FFA) or a failure modes and effects analysis (FMEA), which is like an FFA, except that an FMEA requires a design to work on. And a variant of FMEA is FMECA, where we include the criticality of the failure as it propagates up the hierarchy of the system.

These sorts of techniques make us think about what could go wrong: no function when required; inadvertent function – the subsystem functions when it's not supposed to; and incorrect function, of which there are often multiple versions. We consider all of those causes, all of those failure modes, and if we're doing a big safety program on something quite critical, very often those identified faults, failures and failure modes will feed into the bottom of a fault tree, where we have a hierarchical build-up of causation and we look at how redundancy, mitigation and control measures mitigate those low-level failures and hopefully prevent them from becoming full-blown incidents and accidents.
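As a minimal, hedged sketch of how those low-level failure probabilities can be combined up a fault tree – assuming independent events, which a real analysis would have to justify – the basic gate arithmetic looks like this (all the events and numbers are invented):

```python
from math import prod

def and_gate(probabilities):
    """All inputs must fail (e.g. redundant channels): probabilities multiply."""
    return prod(probabilities)

def or_gate(probabilities):
    """Any single input failing is enough: complement of all inputs surviving."""
    return 1 - prod(1 - p for p in probabilities)

# Invented example: the top event occurs if both redundant pumps fail,
# OR a single controller fault occurs.
p_pump_a, p_pump_b, p_controller = 1e-3, 1e-3, 5e-6
p_top = or_gate([and_gate([p_pump_a, p_pump_b]), p_controller])
print(f"Top event probability ~ {p_top:.2e}")
```

This is only the mechanics; the hard work in a real fault tree is getting the logic, the data and the common-cause dependencies right.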

And these techniques, particularly the FFA and the FMEA, are also good for hazard identification and for investigating performance and non-compliance issues. You can apply FFA- and FMEA-type techniques to a specification and ask, "We've asked for this – what could happen if we get what we asked for? What could go wrong? And what could go wrong with these requirements?"
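To illustrate that idea, here is a small sketch that applies simple FFA-style guide words to a list of functional requirements in order to generate "what could go wrong" prompts; the functions and guide words are invented examples, not taken from the standard:

```python
# Illustrative functional failure analysis prompts over a requirement set.
functions = [
    "Provide cooling flow to the engine",
    "Report coolant temperature to the operator",
]
guide_words = [
    "No function when required",
    "Inadvertent function (operates when not required)",
    "Incorrect function (wrong value, wrong timing, degraded)",
]

for function in functions:
    for guide_word in guide_words:
        print(f"Function: {function} | Deviation: {guide_word} | "
              "Effects? Causes? Existing controls?")
```

Each prompt then gets answered by the analyst; the value of the technique is the systematic coverage, not the tooling.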

Commentary #2

Now, the second part that I've chosen to highlight is the consideration of the human within a subsystem, and this is important. Traditionally, it's not always been done that well. Human factors, I'm glad to say, is becoming more prominent and more used, because in many, many systems the human is a key component, a key player in the overall system. And in the past, we have tended to build systems and then just expect the human operator and maintainer to cope with the vicissitudes of that system. Maybe the system isn't that well designed, in that it's not very usable; its performance depends on being lovingly looked after and tweaked; and maybe systems are vulnerable to human error, and even induce human error. We need to get a lot better at designing systems for human use.

So, we could use several techniques. We could use a HAZOP, a hazard and operability study, to consider information flows to and from the human. There are lots of specialist human factors analyses out there, and I'm hoping to run a series of human factors sessions, interviewing a very knowledgeable colleague of mine – but more on that later; that will come in due course. We'll look at those specialist human analysis techniques then. But there have been a couple of conceptual models around for quite a long time – about 20 years now, at least – for how to think about humans in the system.

Human-System Models

So, we've got the 5M model and the SHELL model, and I'm just going to briefly illustrate those. Both models are taken from the US Federal Aviation Administration's System Safety Handbook, which dates to the end of 2000. These have been around a long time – they were around before the year 2000 – and they're quite long in the tooth.

We've got the SHELL model, which considers software, hardware, environment and liveware – the human. There's quite a nice checklist on Wikipedia for things to consider, looking at all the different interfaces between those elements. That's at the hyperlink you can see at the bottom of the slide.

Then on the right-hand side, we've got the 5M model – and apologies for the gendered language – where the five Ms are the man (the human), the machine, management, the media – the media being the operating and maintenance environment – and then, in the middle, the mission. The humans, the machines and the management come together in order to perform a mission within a certain environment. That's another very useful way of conceptualizing the contribution of humans and the interaction between human and system: human operators, maintainers, frontline staff and management, all in a particular operating environment and environmental context, and how they come together to accomplish the mission or the function of the system, whatever it might be.

Now, a word of caution on this. It's possible to spend gigantic sums of money on human factors analysis. Very often we tend to target it at the most critical points, and we very often target it at the operator, particularly for those phases of operation where the operator must do things in a limited amount of time. The operator will be under pressure, and if they don't take the right action within a certain time, something could go wrong. So we do tend to target this analysis in those areas, and to spend money, hopefully, in a sensible and targeted way.

Commentary #3

My final slide of additional commentary. The other thing we've talked about for this task is compliance checking. We should get a subsystem specification. If we don't get a subsystem specification, well, what are the expectations on the subsystem? Are they documented anywhere? Is it in the ConOps? Is there an interface requirements document, or are there interface control documents for other systems or subsystems that interface with our subsystem – anywhere we can get information? If we have a subsystem spec – a bunch of functional requirements, say – early on we could do a functional failure analysis of those functional requirements. We can do this work really quite early if we need to, and think about "What interfaces are expected or required from our subsystem?" versus "What does our subsystem actually do?" Any mismatches could give rise to problems.

So, this is a type of activity where we're looking for continuity and coherence across the interface; we're looking for things to join up. And if they don't join up, or they're mismatched, then there's a potential problem. And, as we look down into the subsystem, are there any derived safety requirements from above that say this subsystem needs to do this, or not do that, in order to manage a hazard? Those are important to identify.

Again, if that hasn't been specified, the subsystem contractor probably won't do it, because it's extra expense. And they may well genuinely believe that they don't need to. We're all proud of the things we make, and we sometimes feel emotionally threatened if somebody suggests our piece of kit might go wrong, and that does blind people to potential problems.

Going the other way, suppose we are a higher-level authority, or a system prime contractor or something, and we've got to look at the documentation from a subsystem supplier. Well, we might find some information from sales brochures or feature lists, or there might be a description of the benefits or the functions of the system and its outputs. We should hopefully be able to get hold of some operating and maintenance manuals, and very often those manuals will contain warnings and cautions and say, "You must look after this piece of kit by doing this". I'm thinking of Gremlins now – "Don't feed it after midnight or get it wet", otherwise bad things will happen. Sorry about that, a slightly fatuous example, but a good illustration, I think. And ideally, if there are any training materials associated with the piece of kit, is there a training needs analysis that shows how the training was developed? Very often, if a TNA is done well, there's lots of good information in there. Even if it's not quite for the same application that we use the piece of kit for, you can learn a lot from that kind of stuff.

And finally, if all else fails and you've got a legacy piece of kit, then you can physically inspect it. And if you can take it apart and put it back together again, do so. You might discover there's asbestos in it. You might discover lithium batteries, or whatever it might be – fire hazards, flammable materials, toxic materials, you name it. There are a lot of ways that we can get information about the subsystem. Ideally, we ask for everything upfront – say, if there are any hazardous chemicals in there, then you must provide the hazard sheets and the hazard data in accordance with international or national standards, and so on and so forth. But if you can't get that, or you haven't asked for it, there are other ways of doing it; they're just often time-consuming and not the optimal way of doing it.

So again, do think about what you need upfront and do ask for it. And if the contractor can't supply exactly what you want, what you need, you then have to decide whether you can live with that, whether you can use some of these alternative techniques, or whether you just have to say, "No, thanks. I'll go to another supplier of something similar". I may have to pay more for it, but I'll get a better-quality product that actually comes with some safety evidence, which means I can actually integrate it and use it within my system. Sometimes you do have to make some tough decisions, and the earlier we make those tough decisions the better, in my experience.

Copyright Statement

So that's all the technical content. Just to say that all the text that's in italics and in speech marks is from the standard, which is copyright-free. But this presentation, and especially all the commentary and the added value, is copyright of The Safety Artisan, 2020.

For More …

And if you want more videos like this, the rest of the 882 series, and other resources on safety topics, you can find them at the website www.safetyartisan.com. You can also go to the Safety Artisan page at Patreon: that's www.patreon.com – search for Safety Artisan, all one word.

End

So, that's the end of the presentation, and it just remains for me to say thanks very much for watching and supporting the Safety Artisan. I'll be doing Task 205, System Hazard Analysis, next in the series. I look forward to seeing you again soon. Goodbye, everyone.
