Transcript: Preliminary Hazard List (T201)

Here is the full transcript: Preliminary Hazard List (Task 201 in Mil-Std-882E).

The full video is here.

Preliminary Hazard Identification

Hello, everyone, and welcome to the Safety Artisan, where you will find instructional materials that are professional, pragmatic and impartial because we don’t have anything to sell and we don’t have an axe to grind. Let’s look at what we’re doing today, which is Preliminary Hazard Identification. We are looking at one of the first actual analysis tasks in Mil-Std-882E, which is a systems safety engineering standard from the US government, and it’s typically used on military systems, but it does turn up elsewhere.

Preliminary Hazard ID is Task 201.

I’m recording this on the 2nd of February 2020. However, the Mil-Std has been in existence since May 2012 and it is still current; it looks like it is sticking around for quite a while, so this lesson isn’t likely to go out of date anytime soon.

Topics for this session

What we’re going to cover, quoting from the task, is: first of all, the purpose and the task description, where the task talks quite a lot about historical review (I think we’ve got three slides of that), then recording results, putting stuff in contracts, and then I’m adding some commentary of my own. I will be commenting all the way through; that’s the value add, that’s why I’m doing this. But then there’s some specific extra information that I think you will find helpful, should you need to implement Task 201. In this session, we’ve moved up one level from awareness and we are now looking at practice, at being equipped to actually perform safety jobs, to do safety tasks.

Preliminary Hazard Identification (T201)

The purpose of Task 201 is to compile a list of potential hazards early in development. Two things to note here: it is only a list, and it’s very preliminary. I’ll keep coming back to that; this is important. Remember, this is the very first thing we do that’s an analytical task. There are planning tasks in the 100 series, but actually some of them depend on you doing Task 201, because you can’t work out how you are going to manage something until you’ve got some idea of what you’re dealing with. We’ll come back to that in later lessons.

It is a list of potential hazards that we’re after, and we’re trying to do it early in development. I really can’t overemphasise how important it is to do these things early in development, because we need to do some work early on in order to set expectations, set budgets, set requirements, and basically get a grip, get some scope on what we think we might be doing for the rest of the program. So this is a really important task; it should be done as early as possible, and it’s okay to do it several times. Because it’s an early task, it should be quick and fairly cheap. We should be doing it just as soon as we can, at the conceptual stage, when we don’t even have a proper set of requirements, and then maybe we redo it thereafter. And maybe different organisations will do it for themselves and pass the information on to others. We’ll talk about that later as well.

The task description. It says the contractor shall – actually, forget about who’s supposed to do it; lots of people could and should be doing this as part of their project management or program management risk reduction, because, as I said, this is fundamental to what we’re doing for the rest of the safety program and indeed maybe the whole project itself. So, what we need to do is “examine the system shortly after the materiel solution analysis begins and compile a Preliminary Hazard List (PHL) identifying potential hazards inherent in the concept”. That’s what the standard actually says.

A couple of things to note here. Saying that you start doing it after materiel solution analysis has begun might be read as implying that you don’t do it until after you finish the requirements, and I think that’s wrong, far too late. To my mind, that is not the correct interpretation. Indeed, if we look at the last four words in the definition, it says we’re “identifying potential hazards inherent in the concept”. That, I think, gives us the correct steer. We’ve got a concept, maybe not even a full set of requirements; what are the hazards associated with that concept, with that scope? I think that’s a good way to look at it.

Historical Review

This task places a great deal of emphasis on the review of historical documentation, and specifically on reviewing documentation on similar and legacy systems. A legacy system is an old system, maybe one that we are replacing with this system, but there might be other legacy systems around as well. We need to look at those systems. The assumption is that we actually have some data from similar and legacy systems, and that’s really a key weakness here: we’re assuming that we can get hold of that data. I’ll talk about the issues with that when I get to my commentary at the end.

We need to look at the following (and it says including but not limited to).

a) Mishap and incident reports. This is a US standard; they talk about mishaps because they’re trying to avoid saying accidents, which implies that something has gone wrong accidentally. The term mishap, I believe, is meant to imply that whether it was accidental or deliberate doesn’t matter: something has gone wrong, an undesirable event has happened, it’s a mishap. So we need to look at mishap and incident reports. Well, that’s great, if you’ve got them and if they’re of good quality.

b) You need to look at hazard tracking systems. When the Mil-Std talks about hazard tracking systems, it is referring to what you and I might describe as a hazard log or a risk register. It doesn’t really matter what they’re called; where are you storing information about your hazards? Indeed, the word tracking implies that they are live hazards, in other words, associated with a live system where things are dynamic and changing. But don’t worry about that; we should be looking in our hazard logs, our risk registers, that kind of thing.

c) Can we look at lessons learned? Fantastic, again, if we’ve got them. But unfortunately, learning lessons can be a somewhat political exercise; it doesn’t always happen.

d) We need to look at previous safety analyses and assessments. That’s fantastic. If we’ve got stuff that’s even halfway relevant, maybe we could use it and save ourselves a lot of time and trouble. Or maybe we could look at what’s around and say, actually, I think that’s not suitable because…, and even that gives you a steer: we need to avoid what went wrong with the previous set of analyses. But hopefully without just throwing them out and dismissing them out of hand, because that’s far too easy to do (not invented here, I didn’t do it, therefore it’s no good). Human pride is a dangerous thing.

e) It says health hazard information. Maybe there are some medical results, some toxicology; maybe we’ve been tracking the exposure of people to certain toxins on similar systems. What can we learn from that?

f) And test documentation. Let’s look at these legacy systems: what went right, what didn’t go right, and what had to be done about it. These are all useful sources of information.

g) And then that list continues. Mil-Std-882 includes environmental impact; safety and environmental impact are implicit all the way through the standard. So we also need to look at environmental issues, thinking about system testing, training, where it’s going to be deployed, and maintenance at different levels. And we talk about potential locations for these things because environmental issues are often location-sensitive. Doing a particular task in the middle of nowhere in a desert, for example, might be completely harmless; doing it next to a significant watercourse near a Ramsar wetland (a wetland of international importance), an area of outstanding natural beauty or a national park might have very different implications. It’s always location-sensitive with environmental stuff.

h) And being an American publication, it goes on to give a specific example: the National Environmental Policy Act (NEPA), which applies in the U.S., and then similarly there is an executive order looking at actions by the federal government abroad and how the federal government should manage those. Now, those are U.S. examples. If you’re not in the U.S., there’s probably a local equivalent of these things. I live and work in Australia, where we have the Environment Protection and Biodiversity Conservation (EPBC) Act. It doesn’t just apply in Australia; it also applies to what the Commonwealth Government does abroad. Even outside the normal Australian jurisdiction, it does apply.

i) And then finally, we’ve got to think about disposing of the kit. Demilitarisation: maybe we’re going to take out the old military stuff and flog it to somebody, so we need to think about the safety and environmental impacts of doing that. Or maybe we’re just going to dispose of the kit, whatever it might be: scrap it, destroy it, or put it away somewhere, store it in the desert for a rainy day, if that’s not a contradiction in terms. So we’re going to think about disposal as well: what are the safety and environmental implications of doing so? There’s a good, broad checklist here to help us think about different issues.

Recording Results

It says that whoever is doing this stuff, the contractor, shall document identified hazards in the hazard tracking system: the hazard log, the risk register, whatever you want to call it. And the content of this recording and the formats to be used have got to be agreed between, it says, the contractor and the program office, but generally between the purchaser and whoever is doing the work. The purchaser might also be the ultimate end-user, as is often the case with the government, or it might be something else; it might be that the purchaser will sell on to an end-user, but either way they’ve got to agree with the contractor what they’re going to do.

And of course, in doing so, you’ve got to understand what your legal obligations are. Again, for example, in Australia, the WHS Act puts particular obligations on designers, manufacturers, suppliers, importers, etc. There are three duties, and two of them are associated with passing on information to the end-user. So be aware of what your obligations are and the kind of information that, at minimum, you must provide, and probably make sure that you’re going to get that minimum information in a usable format, and maybe some other stuff as well that you might need. And it says, unless specified elsewhere, in other words by agreement with the government or whoever the purchaser is, you’ve got to have a brief description of the hazard and the causal factors associated with each identified hazard.

Now this is beginning to get away from just a pure list, isn’t it? It’s not just a list; we have to have a description so that we can scope out the hazard that we’re talking about. Bear in mind that early on we might identify a lot of hazards that subsequently turn out to be just one hazard, or are not applicable, or are covered by something else. So we need a description that allows us to understand the boundaries of what we’re talking about. And then we’re also being asked to identify causes or causal factors, maybe circumstances, what could cause these things, and so on. So it’s a little bit more than just a list, but we’re beginning to fill in the fields in the hazard log as we do this at the start.
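To make that concrete, here is a minimal sketch, in Python, of what an early hazard-log entry along these lines might look like: just a brief description to scope the hazard, plus its causal factors. The field names and the example entry are illustrative assumptions, not prescribed by the standard.

```python
# A minimal sketch (not from the standard) of an early Preliminary Hazard List
# entry: just enough to scope each hazard and record its causal factors,
# which is what Task 201 asks for. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PHLEntry:
    hazard_id: str                       # e.g. "PHL-001"
    description: str                     # brief description scoping the hazard
    causal_factors: list[str] = field(default_factory=list)
    source: str = ""                     # historical data, checklist, analysis...
    status: str = "Open"                 # preliminary entries stay open until analysed

phl = [
    PHLEntry(
        hazard_id="PHL-001",
        description="Uncommanded vehicle movement while parked on a slope",
        causal_factors=["parking brake not applied", "brake mechanism failure"],
        source="Functional Failure Analysis",
    ),
]

for entry in phl:
    print(entry.hazard_id, "-", entry.description)
```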

Contracting

Now, this is very useful: in the standard, for every task, it says here are the details to be specified in the contractual documentation, and notice that it says details to be specified in the Request for Proposal. You’ve got to ask for this stuff if you need it. You’ve got to know that you need it, why you need it, and what you’re going to do with the information as the purchaser. And you’ve got to put that in right at the start, in the Request for Proposal and the Statement of Work. And here’s some guidance on what to include.

The big point here is that this needs to be done very early on. In fact, to be honest, the purchaser is going to have to do Task 201 themselves, and maybe some other tasks, in order to get enough data and enough understanding to write the Request for Proposal and the Statement of Work in the first place. So maybe you do a quick job yourself to inform your contracting strategy and what you’re going to do, and then you get the contractor to do it as well.

What have we got to include? Well, we’ve got to impose Task 201. I’ve seen lots of contracts where they just say: do safety in accordance with this standard, do Mil-Std-882 or whatever it might be. A very broad, open-ended statement like that is vulnerable to interpretation, because what your contractors, your tenderers, will do, in order to come in at the minimum price and be competitive, is tailor the Mil-Std and chop out things that they think are unnecessary or that they can get away without doing, and they might chop out some stuff that you actually find you need. That can cause problems. But even worse, if you’ve got a contractor who doesn’t understand how to do system safety engineering, who doesn’t understand Mil-Std-882, they might just blindly say, oh yeah, we’ll do that. The classic mistake is that the contract says do Mil-Std-882E and here are all the DIDs, the Data Item Descriptions that describe what’s got to be in the various documents the contractor has to provide. And of course, government projects love having lots of documentation, whether it’s actually helpful or not.

But the danger with this is that it can mislead the contractor, because if they don’t understand what a system safety program is, they might just go, I’ve got to produce all these documents, yeah, I can do that, and not actually realise that they’ve got to do quite a lot of analysis work in order to generate the content for those reports. I know that sounds daft, but it does happen; I’ve seen it again and again. You get a contractor who produces reports that on paper have met the requirements of the DID, because they’ve got all the right headings and all the right columns or whatever else, but they’re full of garbage information, or TBD, or stuff that is obviously rubbish. And you think, no, no: you actually have to specify that you need to do the task, and the documentation is the result of the task. We don’t want the tail wagging the dog. Anyway, I’ll get off my soapbox. You’ve got to impose the task; it’s a job to be done, not just a piece of paper to be produced.

Identification of the functional disciplines to be addressed. Who’s going to be involved? What are you including? Are you including engineering, maintenance, human factors? Ideally, you want quite a wide involvement, you want lots of stakeholders, and you need to think about that.

Guidance on obtaining access to government information. Now, whether it’s the government or whoever the purchaser is (it doesn’t have to be a government), getting hold of information and guidance out of the purchaser can be very difficult. And very often that’s because the purchaser hasn’t done their homework. They haven’t worked out what information they will need to provide, because maybe they don’t understand the demands of the task, or they’ve just not thought it through, quite frankly. And the contractor, or whoever is trying to do the analysis, finds that they are hamstrung; they can’t actually do the work without information being provided by the purchaser.

And that means the contractor can’t do the work, so they just pass the risk straight back to the government, back to the purchaser, and say: I need this stuff. And then the purchaser ends up having to generate information very quickly at short notice, which is never good; you never get a quality result doing that. Often, my job as a consultant means I get called in by the purchaser as often as by the supplier to say: help, we don’t know what’s going on here, the contractor has said they can’t do the safety program without this information, and I don’t understand what they want or what to tell them. As a consultant, I find myself spending a lot of time providing this kind of expertise because either the purchaser or the contractor doesn’t understand their obligations and hasn’t fulfilled them. Which is great for me, my firm gets paid a lot of money, but it’s not good for the safety program.

Content and format requirements. Yes, we need to specify the content that we need. I say need, not want. What are we going to do with this stuff? If we’re not going to do anything with it, do we actually need it at all? And what are the format requirements? Maybe we need to take information from lots of different subcontractors and put it all together in a consistent risk register. If it comes in all different formats, that’s going to make a lot more work, and it may even make merging the information impossible. So we need to think about that.

Now, what’s the concept of operations? We’ll come back to that in later tasks. But the concept of operations is: what are we going to do with this system? That should provide the operating environment. It should provide an overview of some basic requirements, maybe how the system will interface with other systems, how it will interact, and the concepts of operation, deployment, basing and maintenance. Maybe they’re only assumptions at this stage, but the people doing the analysis will need this stuff. You recall the environmental stuff is very location-sensitive, so we need a stab at where these things will happen, and we need to understand what the system is going to be used for, because in safety, context is everything. A system that might be perfectly safe in one context can become very dangerous, without anybody realising, if it’s used for something other than what it was originally designed or conceived for.

Other specific hazard management requirements. What definitions are we using? Very important, because again, it’s very easy to get different information that’s been generated against different definitions by different contractors, and then it’s utter confusion. Can we compare like with like, or can’t we? What risk matrix are we going to use on this program? What normally happens on 882 programs is that people just take the risk matrix out of the standard and use it without changing it. Now, that might be appropriate in certain circumstances, but it isn’t always. That’s a very complex, high-level management issue, and I’m going to talk separately about how we actually derive a suitable risk matrix for our purposes and why we should do so, because the use of an unsuitable matrix can cause all sorts of problems downstream, both conceptual problems in the way that we think about stuff and, at a lower level, more mechanistic problems. But I don’t have time to go into that here.
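To make the risk matrix point concrete, here is a minimal sketch, in Python, of a severity-by-likelihood lookup. The category names follow Mil-Std-882E conventions, but the cell values are purely illustrative assumptions, not the standard’s Table III; as I’ve said, a program should derive or formally tailor a matrix that suits its purposes.

```python
# A minimal sketch of a severity x likelihood risk matrix lookup. The category
# names follow Mil-Std-882E conventions, but the cell assignments below are
# purely illustrative -- a real program would use the standard's Tables I-III
# or a formally approved tailored matrix.
SEVERITY = ["Catastrophic", "Critical", "Marginal", "Negligible"]
LIKELIHOOD = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

# Illustrative risk levels only (rows = severity, columns = likelihood).
MATRIX = [
    ["High",    "High",    "Serious", "Serious", "Medium"],
    ["High",    "Serious", "Serious", "Medium",  "Low"],
    ["Serious", "Medium",  "Medium",  "Low",     "Low"],
    ["Medium",  "Low",     "Low",     "Low",     "Low"],
]

def risk_level(severity: str, likelihood: str) -> str:
    """Return the (illustrative) risk level for a severity/likelihood pair."""
    return MATRIX[SEVERITY.index(severity)][LIKELIHOOD.index(likelihood)]

print(risk_level("Critical", "Occasional"))  # -> "Serious" in this example matrix
```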

Then references and sources of hazard identification. This is another reason why the purchaser needs to have done their homework. Maybe we want the contractor, or whoever is doing the analysis, to look at particular sources of information that we consider relevant and necessary. So we need to specify that and understand what those sources are. And usually, we need to understand why we want them as well.

Commentary

That’s what was in the standard. As you can see, it’s very short (only a page and a half in the standard) and it is quite a light, high-level definition of the task, because it’s an early task. Now let’s add some value here. Task 201 talks all about historical data. However, that is not the only way to do preliminary hazard identification. There are in fact two other classic methods for PHI: one is the use of hazard checklists, and you can also use some simple analysis techniques. And we need to remember that this is preliminary hazard identification; we’re doing this early and often to identify as many hazards as possible, to find those hazards and the associated causes, consequences, and maybe some controls as well. We’re trying to find stuff, not dismiss it or close out the hazards. Again, I’ve seen projects where I’ve read a preliminary hazard identification report and it says, we closed 50 hazards, and I think, no, you didn’t; you weren’t supposed to close anything, because this is preliminary hazard identification. You identify stuff and then it gets further analysed. And if, upon analysis, you discover that a hazard is not relevant, that it cannot possibly happen, then, and only then, can you close it. So let’s remember, this is preliminary hazard ID.

Commentary – Historical Data

First of all, let’s look at historical data, and some issues with using it. The first is availability: can we actually get hold of it? Now, it may be that you work for a big corporate or government organisation that, for whatever reason, has good record keeping, and you’ve got lots and lots of internal data that is of good quality, that you’re allowed to access, that you know about and can find. If you are one of those very, very lucky people, you are in a minority, in my experience. If you’ve got all that stuff, fantastic, use it. But if you haven’t, or if the information is of poor quality, or people won’t give you access for whatever reason (and there are all sorts of reasons why people want to conceal information; they’re frightened of what people may discover, especially safety engineers), you may have to go out to external sources.

Now, the good news is that in the age of the Internet, getting hold of external data is extremely easy. There are lots of potential sources of data out there, ranging from stuff on Wikipedia to public reporting of accidents and incidents by regulators, trade associations, learned societies that study these things, academics, or consultancies such as the one I work for. There are all kinds of potential sources of information out there that might be relevant to what you’re doing. And even if you’ve got good internal information, it’s probably worth searching for external data as a due diligence exercise, if nothing else, just to show that you haven’t only looked inwardly, that you’ve actually looked outward at the rest of the world. There are lots of good sources of information out there, and depending on what industry you’re in, what domain you work in, you will probably know some of the things that are relevant in your area.

Now, just because data is available doesn’t mean that it’s reliable. It might be vague or inconsistent (we’ll come onto that later), or it might be patchy. It’s usual for incidents to be under-reported, especially minor incidents. You will often find that the stuff that gets reported is only the more serious stuff, and you should really assume that there has been under-reporting unless you’ve got a good reason not to. To be honest, under-reporting is the norm almost everywhere. So there’s the issue of reliability: the data that you’ve got will be incomplete.

Secondly, another big issue is consistency. People might be reporting mishaps or incidents or accidents or events or occurrences. They might be using all sorts of different terminology to describe stuff that may or may not be relevant to what you’re talking about. There’s lots of information out there, but how has it been classified? Is it consistent? Can you compare all these different sources of information? That can be quite tricky. Very often, because of inconsistencies in the definition of a serious injury, for example, you may find that all you can actually compare with confidence are fatalities, because it’s difficult to interpret death in different ways. As a safety engineer, I frequently find myself starting with fatal accidents, if there are any, because those can’t be misinterpreted, and then looking at serious injuries, minor injuries, and incidents where no one gets hurt but somebody could have been. There are all sorts of pitfalls with the consistency of the data that you might get hold of.
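As a small illustration of that consistency problem, here is a minimal sketch, in Python, of normalising different sources’ severity wording onto one common scale. The mapping is an assumption for illustration only; a real exercise would agree the definitions with the data owners rather than guess.

```python
# A minimal sketch of the kind of normalisation needed when different sources
# classify events differently. The mapping below is an illustrative assumption,
# not an agreed classification scheme.
RAW_TO_COMMON = {
    "fatality": "Fatal",
    "death": "Fatal",
    "lost-time injury": "Serious injury",
    "serious injury": "Serious injury",
    "first aid": "Minor injury",
    "near miss": "No injury",
    "occurrence": "Unclassified",   # can't tell from the source -- flag it, don't guess
}

def normalise(records: list[dict]) -> list[dict]:
    """Map each source's own severity wording onto one common scale."""
    return [
        {**r, "severity": RAW_TO_COMMON.get(r["severity"].lower(), "Unclassified")}
        for r in records
    ]

sample = [{"source": "Regulator A", "severity": "Lost-Time Injury"},
          {"source": "Trade body B", "severity": "Near miss"}]
print(normalise(sample))
```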

Then there’s relevance. It may be that you’re looking at data from a system that superficially looks similar to yours, but with a bit of digging, you may discover that although the system was similar, it was being used in a completely different context, and therefore there are significant differences in the reporting and in what you’re seeing. So there may be data out there that is just not relevant for whatever reason.

And finally, objectivity. Now, this is a two-way street. Historical data is fantastic for objectivity because it stops people saying subjectively, this couldn’t possibly happen. I’ve heard this many times: you come up with something, somebody says, oh, that couldn’t possibly happen, and then you show them the historical evidence that it’s happened many times already, and they have to eat their words. So historical data is fantastic for keeping things objective, provided of course that it’s available, reliable, consistent and relevant. You’ve got to do a bit of work to make sure that you’re getting good data, but if you can, it’s absolutely worth its weight in gold, not just for hazard ID, but for torpedoing some of the stupid things that people come out with when they’re trying to stop you doing your job for whatever reason. Basically, what I’m saying is that historical data is great for shooting down prejudice. Reality always wins; that’s true in safety in the real world and in safety analysis.

Having said all that, what’s the applicability of historical data? It may be that we can really only use it for preliminary hazard identification and analysis. Sometimes I see contractors try to use historical data as the totality of their safety argument: my kit is wonderful, it never goes wrong, and therefore it will never go wrong. That never works, because when you start trying to use historical data as the complete safety argument, you very quickly come up against these problems of availability, reliability, consistency and relevance.

It’s almost impossible to argue that a future system will be safe purely because it’s never gone wrong in the past. In fact, claims like it’s never gone wrong, we’ve never had a problem, we’ve never had an incident, straight away suggest to me that they don’t have a very good incident reporting system, or that they’ve just conveniently ignored the information they do have. Not that people selling things ever do anything like that, of course. No, never. There are a lot of used-car salesmen out there. So we probably have to keep this use of historical data fairly limited; it might be usable for preliminary work only, and then we have to do the real work with analysis. Almost certainly it’s not going to be the whole answer on its own. So do bear that in mind: historical data has its limits.

It’s also worth remembering that we get data from people as well. In Australia, the law requires managers to consult with workers in order to get this kind of information, and no doubt in other countries there are similar obligations. There are lots of people out there, potentially workers, management, suppliers and users, maintainers, regulators, trade associations, who might have relevant information, so we really ought to consult them if we can. Sometimes that information is published, but other times we have to go and talk to people, or get them to come to a preliminary hazard ID meeting in order to take part. There are lots of good ways of doing this stuff.

Commentary – Hazard Checklists

Let’s move on to hazard checklists. Checklists are great because someone else has done the work for you, to a degree, so it’s quick and cheap to get a checklist from somewhere and go through it to see if you can find anything that prompts you to go, yeah, that could be an issue with my system. And the great thing about checklists is that they broaden the scope of your hazard ID, because if your historical data is a bit patchy or a bit inconsistent, as it often is, it will identify some stuff, but not everything. A checklist is very often broad and shallow; it really broadens the scope of the hazard ID and complements your historical data. So I would always recommend having a go with a checklist.

Now, bear in mind that checklists tend to identify causes, so you then have to use some imagination: okay, here’s a cause; in the context of my system, in the context of this concept of operations (very important), how could that cause lead to a hazard and maybe to a mishap? So you need to apply some imagination with your checklist, and it can be a good way of prompting a meeting of stakeholders to think about different issues, because people will turn up with an axe to grind; they’ll have their favourite thing they want to talk about. Having a checklist, or having historical data to review, keeps it objective, and it keeps people on track so that they don’t just go down a rabbit hole and never look at anything else.

But again, this is preliminary hazard identification only. If something comes up, I would advise you to take the position that it could happen unless we have evidence that it could not. And notice I say evidence, not opinion. I’ve met plenty of people who will swear blind that such and such could not possibly happen. A classic one that suckered me: somebody said no British pilot would ever be stupid enough to take off with that problem, and like a fool, I believed them. So don’t listen to opinion, however convincing it is, unless there’s evidence to say it cannot happen, because it will. And in that case, it did, two weeks later. So don’t believe people when they say, oh, that couldn’t possibly happen; it just shows a lack of imagination. Or they’ve got some vested interest and they’re trying to keep the peace or keep you away from something.

It’s worth mentioning that in Australia, at a minimum, we need to use the approach to hazard ID that is in the WHS Risk Management Code of Practice. There’s some good basic advice in that code of practice on what to do to identify and analyse hazards, assess risks and manage them. We need to do it, at a minimum. It’s a good way to start, and in fact there’s a bit of a hazard checklist in there as well. It’s not great; it’s mainly workplace stuff rather than design stuff or systems engineering stuff. But nevertheless, there’s some good stuff in there, and that is the absolute bare minimum that we have to do in Australia. There will probably be local equivalents wherever you are.

If you’re looking for a good example of a general checklist, look at the UK’s ASEMS, the MOD’s Acquisition Safety and Environmental Management System. Within POSMS, the Project-Oriented Safety Management System, there is a safety management procedure, SMP04, which covers PHI, and that’s got a checklist in it. It’s aimed at big military equipment, but there’s a lot of interesting stuff in there that you could apply to almost anything. If you look online, you’ll probably find lots of checklists, both general checklists and specialist checklists for your area; maybe your trade association or whatever has a specialist checklist for the particular thing that you do. It’s always good to look those things up online and see if you can access and use them. And as I say, using multiple techniques helps us to have confidence that we’ve got fairly complete coverage, which is something we’re going to need later on. Depending on your regulator, you might have to demonstrate that you’ve done a thorough job, and using multiple techniques is a good way of doing that. As I’ve already said, checklists nicely complement historical data because they’ve got different weaknesses.
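If it helps, here is a minimal sketch, in Python, of how a checklist can be worked through systematically: each prompt is considered against each system element, and nothing is ruled out without evidence. The checklist items and system elements below are illustrative assumptions, not taken from SMP04 or the Code of Practice.

```python
# A minimal sketch of using a generic hazard checklist as a prompt list: each
# checklist item is considered against each system element, and anything that
# can't be ruled out with evidence is recorded for later analysis. The entries
# below are illustrative assumptions only.
CHECKLIST = ["electrical energy", "stored mechanical energy", "hazardous materials",
             "moving parts", "noise", "working at height"]
SYSTEM_ELEMENTS = ["power unit", "drive train", "operator cab"]

candidate_hazards = []
for prompt in CHECKLIST:
    for element in SYSTEM_ELEMENTS:
        # Default position: it could apply unless there is evidence that it cannot.
        candidate_hazards.append(
            f"Review: could '{prompt}' be a hazard source in the {element}?")

for line in candidate_hazards[:5]:
    print(line)
```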

Commentary – Analysis Technique

A third technique, which again takes a different approach and complements the other two, is to use some kind of analysis technique to identify hazards. There are lots of them out there. I’m not going to go into them all in this session; I’m just going to give you one example, which is probably the simplest one I know and therefore the most cost-effective. It’s probably a good idea to do it as a desktop exercise first and then get some stakeholders in and do it live with them, either using what you’ve prepared or keeping it in your back pocket in case you need to get things going, if people are stumped and not sure what to do.

Now, the technique I’m going to talk about is called Functional Failure Analysis (FFA). Really, all it does is this: you take the basic top-level functions of whatever it is you’re considering. You’ve got your concept of operations that says I need a system to do X, Y and Z, so you look at X, Y and Z, and for each one of these functions you ask: what happens if it doesn’t work when it’s supposed to? What happens if it works when I don’t want it to (the un-commanded or unwanted function)? And then, what if it happens but doesn’t happen completely correctly, what if it happens incorrectly? There might be several different answers to that.

I’ll give you an example. Let’s assume that we were Mr Mercedes, inventing the horseless carriage, the automobile, the car, and you say: this thing has got a motor; I want it to start off, I want it to go, and then I want it to stop. Take those really simple conceptual ideas: I want it to go, I want it to start moving. What happens if it doesn’t? Well, nothing actually, from a safety point of view. The driver might be a bit frustrated, but it’s not going to hurt anybody. The un-commanded function: what if it goes when it’s not supposed to, or maybe the vehicle rolls away downhill when it’s not supposed to? Now that’s bad. In that case we need a parking brake, a handbrake, so that it doesn’t do that, or we use chocks or something else to restrain it.

Straight away, with something as simple and simplistic as this, you can begin to identify issues and say, we need to do something about that. This is a really powerful technique; you get a lot of bang per buck. Of course, we could go on with the example; it’s a trivial example, but you can see how powerful it potentially is, providing you’re prepared to ask these open-ended questions and answer them imaginatively, without closing your mind to different possibilities. So there’s an example of an analysis technique, and again, remember that this is preliminary hazard ID: if we’ve identified something that could happen, then it could happen, unless we have evidence that it could not.
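For readers who like to see this as a repeatable recipe, here is a minimal sketch, in Python, of the FFA prompts just described. The functions and guidewords mirror the car example above; the output is simply a prompt table to take into a stakeholder workshop, not anything mandated by the task.

```python
# A minimal sketch of Functional Failure Analysis: take each top-level function
# and apply simple guide questions. The functions and guidewords mirror the car
# example above; the result is a prompt table for a stakeholder workshop.
FUNCTIONS = ["start moving", "keep moving (go)", "stop"]
GUIDEWORDS = [
    "fails to occur when commanded",
    "occurs when not commanded",
    "occurs incorrectly (too much / too little / too late)",
]

ffa_rows = []
for function in FUNCTIONS:
    for guideword in GUIDEWORDS:
        ffa_rows.append({
            "function": function,
            "deviation": guideword,
            "possible_effect": "TBD at workshop",   # filled in with stakeholders
        })

for row in ffa_rows:
    print(f"{row['function']:<20} | {row['deviation']}")
```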

Signing Off

I’ve talked for long enough; it just remains for me to point out that the quotations from the Mil-Std are copyright-free, but this video is copyright of The Safety Artisan 2020. You can find more safety information, more lessons and more safety resources at my Safety Artisan page on Patreon and also at www.safetyartisan.com. That’s the end of the lesson; thank you very much for listening, and I hope you’ve found today’s session useful. Goodbye.


Mil-Std-882E Preliminary Hazard List (T201) & Analysis (T202)

This is Mil-Std-882E Preliminary Hazard List & Analysis.

The 200-series tasks fall into several natural groups. Tasks 201 and 202 address the generation of a Preliminary Hazard List and the conduct of Preliminary Hazard Analysis, respectively.

TASK 201 PRELIMINARY HAZARD LIST

201.1 Purpose. Task 201 is to compile a list of potential hazards early in development.

201.2 Task description. The contractor shall:

201.2.1 Examine the system shortly after the materiel solution analysis begins and compile a Preliminary Hazard List (PHL) identifying potential hazards inherent in the concept.

201.2.2 Review historical documentation on similar and legacy systems, including but not limited to:

  • a. Mishap and incident reports.
  • b. Hazard tracking systems.
  • c. Lessons learned.
  • d. Safety analyses and assessments.
  • e. Health hazard information.
  • f. Test documentation.
  • g. Environmental issues at potential locations for system testing, training, fielding/basing, and maintenance (organizational and depot).
  • h. Documentation associated with National Environmental Policy Act (NEPA) and Executive Order (EO) 12114, Environmental Effects Abroad of Major Federal Actions.
  • i. Demilitarization and disposal plans.

201.2.3 The contractor shall document identified hazards in the Hazard Tracking System (HTS). Contents and formats will be as agreed upon between the contractor and the Program Office. Unless otherwise specified in 201.3.d, minimum content shall include:

  • a. A brief description of the hazard.
  • b. The causal factor(s) for each identified hazard.

201.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include the following, as applicable:

  • a. Imposition of Task 201. (R)
  • b. Identification of functional discipline(s) to be addressed by this task. (R)
  • c. Guidance on obtaining access to Government documentation.
  • d. Content and format requirements for the PHL.
  • e. Concept of operations.
  • f. Other specific hazard management requirements, e.g., specific risk definitions and matrix to be used on this program.
  • g. References and sources of hazard identification.

TASK 202 PRELIMINARY HAZARD ANALYSIS

202.1 Purpose. Task 202 is to perform and document a Preliminary Hazard Analysis (PHA) to identify hazards, assess the initial risks, and identify potential mitigation measures.

202.2 Task description. The contractor shall perform and document a PHA to determine initial risk assessments of identified hazards. Hazards associated with the proposed design or function shall be evaluated for severity and probability based on the best available data, including mishap data (as accessible) from similar systems, legacy systems, and other lessons learned. Provisions, alternatives, and mitigation measures to eliminate hazards or reduce associated risk shall be included.

202.2.1 The contractor shall document the results of the PHA in the Hazard Tracking System (HTS).

202.2.2 The PHA shall identify hazards by considering the potential contribution to subsystem or system mishaps from:

  • a. System components.
  • b. Energy sources.
  • c. Ordnance.
  • d. Hazardous Materials (HAZMAT).
  • e. Interfaces and controls.
  • f. Interface considerations to other systems when in a network or System-of-Systems (SoS) architecture.
  • g. Material compatibilities.
  • h. Inadvertent activation.
  • i. Commercial-Off-the-Shelf (COTS), Government-Off-the-Shelf (GOTS), Non-Developmental Items (NDIs), and Government-Furnished Equipment (GFE).
  • j. Software, including software developed by other contractors or sources. Design criteria to control safety-significant software commands and responses (e.g., inadvertent command, failure to command, untimely command or responses, and inappropriate magnitude) shall be identified, and appropriate action shall be taken to incorporate these into the software (and related hardware) specifications.
  • k. Operating environment and constraints.
  • l. Procedures for operating, test, maintenance, built-in-test, diagnostics, emergencies, explosive ordnance render-safe and emergency disposal.
  • m. Modes.
  • n. Health hazards.
  • o. Environmental impacts.
  • p. Human factors engineering and human error analysis of operator functions, tasks, and requirements.
  • q. Life support requirements and safety implications in manned systems, including crash safety, egress, rescue, survival, and salvage.
  • r. Event-unique hazards.
  • s. Built infrastructure, real property installed equipment, and support equipment.
  • t. Malfunctions of the SoS, system, subsystems, components, or software.

202.2.3 For each identified hazard, the PHA shall include an initial risk assessment. The definitions in Tables I and II, and the Risk Assessment Codes (RACs) in Table III shall be used, unless tailored alternative definitions and/or a tailored matrix are formally approved in accordance with Department of Defense (DoD) Component policy.

202.2.4 For each identified hazard, the PHA shall identify potential risk mitigation measures using the system safety design order of precedence specified in 4.3.4.

202.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include the following, as applicable:

  • a. Imposition of Task 202. (R)
  • b. Identification of functional discipline(s) to be addressed by this task. (R)
  • c. Special data elements, format, or data reporting requirements (consider Task 106, Hazard Tracking System).
  • d. Identification of hazards, hazardous areas, or other specific items to be examined or excluded.
  • e. Technical data on COTS, GOTS, NDIs, and GFE to enable the contractor to accomplish the defined task.
  • f. Concept of operations.
  • g. Other specific hazard management requirements, e.g., specific risk definitions and matrix to be used on this program.



Mil-Std-882E Appendix B

This is Mil-Std-882E Appendix B.

SOFTWARE SYSTEM SAFETY ENGINEERING AND ANALYSIS

B.1 Scope. This Appendix is not a mandatory part of the standard. The information contained herein is intended for guidance only. This Appendix provides additional guidance on the software system safety engineering and analysis requirements in 4.4. For more detailed guidance, refer to the Joint Software Systems Safety Engineering Handbook and Allied Ordnance Publication (AOP) 52, Guidance on Software Safety Design and Assessment of Munition-Related Computing Systems.

B.2. Software system safety. A successful software system safety engineering activity is based on a hazard analysis process, a safety-significant software development process, and Level of Rigor (LOR) tasks. The safety-significant software development process and LOR tasks comprise the software system safety integrity process. Emphasis is placed on the context of the “system” and how software contributes to or mitigates failures, hazards, and mishaps. From the perspective of the system safety engineer and the hazard analysis process, software is considered as a subsystem. In most instances, the system safety engineers will perform the hazard analysis process in conjunction with the software development, software test, and Independent Verification and Validation (IV&V) team(s). These teams will implement the safety-significant software development and LOR tasks as a part of the overall Software Development Plan (SDP). The hazard analysis process identifies and mitigates the exact software contributors to hazards. The software system safety integrity process increases the confidence that the software will perform as specified to software system safety and performance requirements while reducing the number of contributors to hazards that may exist in the system. Both processes are essential in reducing the likelihood of software initiating a propagation pathway to a hazardous condition or mishap.

B.2.1 Software system safety hazard analysis. System safety engineers performing the hazard analysis for the system (Preliminary Hazard Analysis (PHA), Subsystem Hazard Analysis (SSHA), System Hazard Analysis (SHA), System-of-Systems (SoS) Hazard Analysis, Functional Hazard Analysis (FHA), Operating and Support Hazard Analysis (O&SHA), and Health Hazard Analysis (HHA)) will ensure that the software system safety engineering analysis tasks are performed. These tasks ensure that software is considered in its contribution to mishap occurrence for the system under analysis, as well as interfacing systems within an SoS architecture. In general, software functionality that directly or indirectly contributes to mishaps, such as the processing of safety-significant data or the transitioning of the system to a state that could lead directly to a mishap, should be thoroughly analyzed. Software sources and specific software errors that cause or contribute to hazards should be identified at the software module and functional level (functions out-of-time or out-of-sequence malfunctions, degrades in function, or does not respond appropriately to system stimuli). In software-intensive, safety significant systems, mishap occurrence will likely be caused by a combination of hardware, software, and human errors. These complex initiation pathways should be analyzed and thoroughly tested to identify existing and/or derived mitigation requirements and constraints to the hardware and software design. As a part of the FHA (Task 208), identify software functionality which can cause, contribute to, or influence a safety-significant hazard. Software requirements that implement Safety-Significant Functions (SSFs) are also identified as safety significant.

B.2.2 Software system safety integrity. Software developers and testers play a major role in producing safe software. Their contribution can be enhanced by incorporating software system safety processes and requirements within the SDP and task activities. The software system safety processes and requirements are based on the identification and establishment of specific software development and test tasks for each acquisition phase of the software development life-cycle (requirements, preliminary design, detailed design, code, unit test, unit integration test, system integration test, and formal qualification testing). All software system safety tasks will be performed at the required LOR, based on the safety criticality of the software functions within each software configuration item or software module of code. The software system safety tasks are derived by performing an FHA to identify SSFs, assigning a Software Control Category (SCC) to each of the safety-significant software functions, assigning a Software Criticality Index (SwCI) based on severity and SCC, and implementing LOR tasks for safety-significant software based on the SwCI. These software system safety tasks are further explained in subsequent paragraphs.

B.2.2.1 Perform a functional hazard analysis. The SSFs of the system should be identified. Once identified, each SSF is assessed and categorized against the SCCs to determine the level of control of the software over safety-significant functionality. Each SSF is mapped to its implementing computer software configuration item or module of code for traceability purposes.

B.2.2.2 Perform a software criticality assessment for each SSF. The software criticality assessment should not be confused with risk. Risk is a measure of the severity and probability of occurrence of a mishap from a particular hazard, whereas software criticality is used to determine how critical a specified software function is with respect to the safety of the system. The software criticality is determined by analyzing the SSF in relation to the system and determining the level of control the software exercises over functionality and contribution to mishaps and hazards. The software criticality assessment combines the severity category with the SCC to derive a SwCI as defined in Table V in 4.4.2 of this Standard. The SwCI is then used as part of the software system safety analysis process to define the LOR tasks which specify the amount of analysis and testing required to assess the software contributions to the system-level risk.
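As an illustrative aside to the quoted paragraph above, here is a minimal sketch, in Python, of how a Software Criticality Index lookup of this kind might be coded. The cell values are placeholders only, not a reproduction of Table V; any real assignment must use the standard’s tables or a formally approved tailored matrix.

```python
# A minimal sketch of a Software Criticality Index (SwCI) lookup: severity
# category crossed with Software Control Category (SCC). The cell values here
# are illustrative placeholders only -- a real program must use Table V of
# Mil-Std-882E (or a formally approved tailored matrix).
SEVERITIES = ["Catastrophic", "Critical", "Marginal", "Negligible"]
SCC_LEVELS = [1, 2, 3, 4, 5]   # 1 = autonomous control ... 5 = no safety impact

# Illustrative SwCI values (rows = SCC, columns = severity); 1 = most rigour.
SWCI_TABLE = [
    [1, 1, 3, 4],
    [1, 2, 3, 4],
    [2, 3, 4, 4],
    [3, 4, 4, 4],
    [5, 5, 5, 5],   # no safety impact -> effectively "Not Safety"
]

def swci(scc: int, severity: str) -> int:
    """Return the (illustrative) SwCI for an SCC / severity pair."""
    return SWCI_TABLE[SCC_LEVELS.index(scc)][SEVERITIES.index(severity)]

print(swci(1, "Catastrophic"))  # -> 1: highest level of rigour in this sketch
```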

B.2.2.3 Software Safety Criticality Matrix (SSCM) tailoring. Tables IV through VI should be used, unless tailored alternative matrices are formally approved in accordance with Department of Defense (DoD) Component policy. However, tailoring should result in a SSCM that meets or exceeds the LOR tasks defined in Table V in 4.4.2 of this Standard. A SwCI 1 from the SSCM implies that the assessed software function or requirement is highly critical to the safety of the system and requires more design, analysis, and test rigor than software that is less critical prior to being assessed in the context of risk reduction. Software with SwCI 2 through SwCI 4 typically requires progressively less design, analysis, and test rigor than high criticality software. Unlike the hardware-related risk index, a low index number does not imply that a design is unacceptable. Rather, it indicates a requirement to apply greater resources to the analysis and testing rigor of the software and its interaction with the system. The SSCM does not consider the likelihood of a software-caused mishap occurring in its initial assessment. However, through the successful implementation of a system and software system safety process and LOR tasks, the likelihood of software contributing to a mishap may be reduced.

B.2.2.4 Software system safety and requirements within software development processes. Once safety-significant software functions are identified, assessed against the SCC, and assigned a SwCI, the implementing software should be designed, coded, and tested against the approved SDP containing the software system safety requirements and LOR tasks. These criteria should be defined, negotiated, and documented in the SDP and the Software Test Plan (STP) early in the development life-cycle.

  • a. SwCI assignment. A SwCI should be assigned to each safety-significant software function and the associated safety-significant software requirements. Assigning the SwCI value of Not Safety to non-safety-significant software requirements provides a record that functionality has been assessed by software system safety engineering and deemed Not Safety. Individual safety-significant software requirements that track to the hazard reports will be assigned a SwCI. The intent of SwCI 4 is to ensure that requirements corresponding to this level are identified and tracked through the system. These “low” safety-significant requirements need only the defined safety-specific testing.
  • b. Task guidance. Guidance regarding tasks that can be placed in the SDP, STP, and safety program plans can be found in multiple references, including the Joint Software Systems Safety Engineering Handbook by the Joint Software Systems Safety Engineering Workgroup and AOP 52, Guidance on Software Safety Design and Assessment of Munition-Related Computing Systems. These tasks and others that may be identified should be based on each individual system or SoS and its complexity and safety criticality, as well as available resources, value added, and level of acceptable risk.

B.2.2.5. Software system safety requirements and tasks. Suggested software system safety requirements and tasks that can be applied to a program are listed in the following paragraphs for consideration and applicability:

  • a. Design requirements. Design requirements to consider include fault tolerant design, fault detection, fault isolation, fault annunciation, fault recovery, warnings, cautions, advisories, redundancy, independence, N-version design, functional partitioning (modules), physical partitioning (processors), design safety guidelines, generic software safety requirements, design safety standards, and best and common practices.
  • b. Process tasks. Process tasks to consider include design review, safety review, design walkthrough, code walkthrough, independent design review, independent code review, independent safety review, traceability of SSFs, SSF code review, Safety-Critical Function (SCF) code review, SCF design review, test case review, test procedure review, safety test result review, independent test results review, safety quality audit inspection, software quality assurance audit, and safety sign-off of reviews and documents.
  • c. Test tasks. Test task considerations include SSF testing, functional thread testing, limited regression testing, 100 percent regression testing, failure modes and effects testing, out-of-bounds testing, safety-significant interface testing, Commercial-Off-the-Shelf (COTS), Government-Off-the-Shelf (GOTS), and Non-Developmental Item (NDI) input/output testing and verification, independent testing of prioritized SSFs, functional qualification testing, IV&V, and nuclear safety cross-check analysis.
  • d. Software system safety risk assessment. After completion of all specified software system safety engineering analysis, software development, and LOR tasks, results will be used as evidence (or input) to assign software’s contribution to the risk associated with a mishap. System safety and software system safety engineering, along with the software development team (and possibly the independent verification team), will evaluate the results of all safety verification activities and will perform an assessment of confidence for each safety-significant requirement and function. This information will be integrated into the program hazard analysis documentation and formal risk assessments. Insufficient evidence or evidence of inadequate software system safety program application should be assessed as risk.
  • (1) Figure B-1 illustrates the relationship between the software system safety activities (hazard analyses, software development, and LOR tasks), system hazards, and risk. Table B-I provides example criteria for determining risk levels associated with software.

FIGURE B-1. Assessing software’s contribution to risk

  • (2) The risks associated with system hazards that have software causes and controls may be acceptable based on evidence that hazards, causes, and mitigations have been identified, implemented, and verified in accordance with DoD customer requirements. The evidence supports the conclusion that hazard controls provide the required level of mitigation and the resultant risks can be accepted by the appropriate risk acceptance authority. In this regard, software is no different from hardware and operators. If the software design does not meet safety requirements, then there is a contribution to risk associated with inadequately verified software hazard causes and controls. Generally, risk assessment is based on quantitative and qualitative judgment and evidence. Table B-I shows how these principles can be applied to provide an assessment of risk associated with software causal factors.

TABLE B-I. Software hazard causal factor risk assessment criteria

  • e. Defining and following a process for assessing risk associated with hazards is critical to the success of a program, particularly as systems are combined into more complex SoS. These SoS often involve systems developed under disparate development and safety programs and may require interfaces with other Service (Army, Navy/Marines, and Air Force) or DoD agency systems. These other SoS stakeholders likely have their own safety processes for determining the acceptability of systems to interface with theirs. Ownership of the overarching system in these complex SoS can become difficult to determine. The process for assessing software’s contribution to risk, described in this Appendix, applies the same principles of risk mitigation used for other risk contributors (e.g., hardware and human). Therefore, this process may serve as a mechanism to achieve a “common ground” between SoS stakeholders on what constitutes an acceptable level of risk, the levels of mitigation required to achieve that acceptable level, and how each constituent system in the SoS contributes to, or supports mitigation of, the SoS hazards.

This is the last excerpt from the Standard


Safe Design – the Transcript

This is the transcript of the full video, which is available here.

Hello, everyone, and welcome to the Safety Artisan, where you will receive safety training via instructional videos on system safety, software safety and design safety. Today I’m talking about design safety. I’m Simon and I’m recording this on the 12th of January 2020, so our first recording of the new decade and let’s hope that we can give you some 20/20 vision. What we’re going to be talking about is safe design, and this safe design guidance comes from Safe Work Australia. I’m showing you some text taken from the website and adding my own commentary and experience.

Topics

The topics that we’re going to cover today are a safe design approach, five principles of safe design, ergonomics (more broadly, its human factors), who has responsibility, doing safe design through the product lifecycle, the benefits of it, our legal obligations in Australia (but this is good advice wherever you are) and the Australian approach to improving safe design in order to reduce casualties in the workplace.

Introduction

The idea of safe design is that it’s about integrating safety management (hazard identification and risk assessment) early in the design process to eliminate or reduce risks throughout the life of a product, whatever that product is; it might be a building, a structure, equipment, a vehicle or infrastructure. This is important because in Australia, in a five-year period, we suffered almost 640 work-related fatalities, of which almost 190 were caused by unsafe design, or design-related factors contributed to the fatality. So there’s an important reason to do this stuff; it’s not an academic exercise, we’re doing it for real reasons. And we’ll come back to the reason why we’re doing it at the end of the presentation.

A Safe Design Approach #1

First, we need to begin safe design right at the start of the lifecycle (we will see more of that later), because it’s at the beginning of the lifecycle that you’re making your big decisions about requirements. What do you want this system to do? How do we design it to do that? What materials, components and subsystems are we going to make or buy in order to put this thing together, whatever it is? And we’re thinking about how we are going to construct it, maintain it, operate it and then get rid of it at the end of life. So, there are lots of big decisions being made early in the lifecycle. And sometimes these decisions are made accidentally, because we don’t consciously think about what we’re doing. We just do stuff, and then we realise afterwards that we’ve made a decision with sometimes quite serious implications.

A big part of my day job as a consultant is trying to help people think about those issues and make good decisions early on, when it’s still cheap, quick and easy to do. Because, of course, the more you’ve invested in a project, the more difficult it is to make changes, both from a financial point of view and because, if people have invested their time, sweat and tears into a project, they get very attached to it and they don’t want to change it. There’s an emotional investment made in the project. So the earlier you get in, at the feasibility stage let’s say, and think about all of this stuff, the easier it is to do. A big part of that is asking: where is this kit going to end up? What legislation, codes of practice and standards do we need to consider and comply with? So that’s the approach.

A Safe Design Approach #2

So, designers need to consider how safety can be achieved through the lifecycle. For example, can we design a machine with protective guarding so that the operator doesn’t get hurt using it, but also so that the machine can be installed and maintained? That’s an important point because often, to get at the works, we must take the machine apart and maybe remove some of those safety features. How do we then protect the maintainer when the machine is opened up and the workings are exposed, where you can get caught in them or electrocuted? And how do we get rid of it? Maybe we’ve used some funky chemicals that are quite difficult to dispose of. In Australia, I suspect like many other places, we’ve got a mountain of old buildings that are full of asbestos, which is costing a gigantic sum of money to get rid of safely. Similarly, we need to design buildings that are fit for occupancy. Maybe we need to think about occupants who are not able-bodied, or who are moving stuff around the building and need a trolley to carry it. We need access, and we need sufficient space to do whatever it is we need to do.

This all sounds simple and obvious, doesn’t it? So, let’s look at these five principles. A lot of this you’re going to recognise from the legal sessions, because the principles of safe design are very much tied in and integrated with the Australian legal approach, WHS; it’s all consistent and it all fits together.

5 Principles of Safe Design

Principle 1: Persons with control. If you’re making decisions that affect the design of products, facilities or processes, it is your responsibility to think about safety; it’s part of your due diligence (if you recall that phrase from that session).

Principle 2: We need to apply safe design at every stage in the lifecycle, from the very beginning right through to the end. That means thinking about risks and eliminating or managing them as early as we can, but thinking forward to the whole lifecycle; it sounds easy, but it’s often done very badly.

Principle 3: Systematic risk management. We need to apply these things that we know about and listen to other broadcasts from The Safety Artisan. We go on and on and on about this because this is our bread and butter as safety engineers, as safety professionals – identify hazards, assess the risk and think about how we will control the risks in order to achieve a safe design.

Principle 4: Safe design knowledge and capability. If you’re controlling the design, if you’re doing technical work or you’re managing it and making decisions, you must know enough about safe design and have the capability to put these principles into practice, to the extent that you need to discharge your duties. When I’m thinking of duties, I’m especially thinking of the health and safety duties of officers, managers and people who make decisions. You need to exercise due diligence (see the Work Health and Safety lessons for more about due diligence).

Principle 5: Information transfer. Part of our duties is not just to do stuff well, but to pass on the information that the users, maintainers, disposers, etc will need in order to make effective use of the design safely. That is through all the lifecycle phases of the product.

So those are the five principles of safe design, and I think they’re all obvious really, aren’t they? So, let’s move on.
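To make Principle 3 a little more concrete, here is a minimal sketch (my own illustration, not from the guidance) of the identify, assess and control loop as a simple hazard-log entry. The severity and likelihood scales and the risk matrix are placeholders, not from any standard.

```python
# Minimal illustration of Principle 3 (identify, assess, control). The scales
# and the risk matrix below are placeholders, not from any standard.
from dataclasses import dataclass, field

RISK_MATRIX = {  # (severity, likelihood) -> risk rating (placeholder values)
    ("major", "likely"): "high",
    ("major", "unlikely"): "medium",
    ("minor", "likely"): "medium",
    ("minor", "unlikely"): "low",
}

@dataclass
class Hazard:
    description: str
    severity: str          # e.g. "major" / "minor" (placeholder scale)
    likelihood: str        # e.g. "likely" / "unlikely" (placeholder scale)
    controls: list = field(default_factory=list)

    def risk(self) -> str:
        return RISK_MATRIX[(self.severity, self.likelihood)]

# Identify a hazard, assess it, then record the control chosen to reduce it.
h = Hazard("Operator contact with unguarded rotating shaft", "major", "likely")
print(h.risk())                      # "high" before controls
h.controls.append("Fixed guard over shaft (eliminate access by design)")
h.likelihood = "unlikely"            # re-assess with the control in place
print(h.risk())                      # "medium" (placeholder outcome)
```

The point is simply that hazard identification, risk assessment and risk control are recorded and repeatable, not done once in someone’s head.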

A Model for Safe Design

As the saying goes, a picture is worth a thousand words. Here is the overview of the Safe Design Model, as they call it. We’ve got activities in a sequence running from top to bottom down the centre. Then on the left, we’ve got monitor and review; that is, somebody in a management or controlling function keeping an eye on things. On the right-hand side, we need to communicate and document what we’re doing. And of course, it’s not just documentation for documentation’s sake; we need to do this in order to fulfil our obligations to provide all the necessary information to users, etc. So that’s the basic layout.

If we zoom in on the early stage, Pre-Design, we need to think about what problem we are trying to solve. What are we trying to do? What is the context for our enterprise? And that might be easy if you’re dealing with a physical thing. If you build a car, you make cars to be driven on the road. There’ll be a driver and maybe passengers in the car, and there’ll be other road users around you, pedestrians, etc. So with a physical system, it’s relatively easy, with a bit of imagination and a bit of effort, to think about who’s involved. But of course, it is not just use, but maintenance as well. Maybe it’s got to go into a garage for a service; how do we make things accessible for maintainers?

And then we move on to Concept Development. We might need to do some research, gather some information, think about previous systems or related systems and learn from them. We might have to consult with some stakeholders who are going to be affected by this enterprise. We put all of that together and use it to help us identify hazards. Again, if we’re talking about a physical system, say a new model of car, it’s probably not that different from the previous model of car that you designed. But of course, every so often you do encounter something that is novel, that hasn’t been done before, or maybe you’re designing something that is virtual, like software, and software is intangible. With intangible things, it’s harder to do this. It can be done (don’t be frightened of it), but it does require a bit more forethought, effort and imagination than is perhaps the case with something simple and familiar like a car.

Moving on in the life cycle we have Design Options. We might think about several different solutions. We might generate some solutions, then analyse and evaluate the risks of those options before selecting which option we’re going to go with. This doesn’t happen in reality very often, because often we’re designing something that’s familiar, or people say, “Well, actually I’m buying a bunch of kit off the shelf (i.e. a bunch of components) and just putting it together, so there is no ‘optioneering’ to do.”

That’s actually incorrect, because very often people do ‘optioneering’ by default, in that they buy the component that is cheap and readily available, but they don’t necessarily say, is the supplier going to provide the safety data that I need that goes along with this component? And maybe the more reputable supplier that does do that is going to charge you more. you need to think about where are you going to end up with all of this and evaluate your options accordingly. And of course, if you are making a system that is purely made from off-the-shelf components, there’s not a lot of design to do, there is just integration.

Well, that pushes all your design decisions and all your options much earlier on in the lifecycle, much higher up on the diagram as we see here. We are still making design choices and design decisions, but maybe it’s just not as obvious. I’ve seen a lot of projects come unstuck because they just bought something that they liked the look of; it appealed to the operators (if you live in an operator-driven organisation, you’ll know what I mean). Some people buy stuff because they are magpies and it looks shiny, fun and funky! Then they buy it, and people like me come along and start asking awkward questions about how you are going to demonstrate that this thing is safe to use and can be put into service. And then, of course, it doesn’t always end well if you don’t think about these things upfront.

So, moving on to Design Synthesis. We’ll select a solution, put stuff together and work on controlling the risks in the system that we are building. I know the model says eliminate and control risks, and if you can eliminate risks then that’s great, but very often you can’t. So we have to deal with the risks that we cannot get rid of, and there are usually some risks in whatever you’re dealing with.

Then we get to Design Completion, where we implement the design, where we put it together and see if it does come together in the real world as we envisaged it. That doesn’t always happen. Then we have got to test it and see whether it does what it’s supposed to do. We’re normally pretty good at testing for that, because if you set out requirements for what it’s supposed to do, then you’ve got something to test against. And of course, if you’re trying to sell a product or service, or you’re trying to get regulators or customers to buy into this thing, it’s got to do what you said it’s going to do. So there’s a big incentive to test the thing to make sure it does what it should do.

We’re not always so good at testing it to make sure that it doesn’t do what it shouldn’t do. That can be a bigger problem space depending on what you’re doing. And that is often the trick and that’s where safety people get involved. The Requirements Engineers, Systems Engineers are great at saying, yeah, here’s the requirements test against the requirements. And then it’s the safety people that come along and say, oh, by the way, you need to make sure that it doesn’t blow up, catch fire, get so hot that you can burn people. You need to eliminate the sharp edges. You need to make sure that people don’t get electrocuted when operating or maintaining this thing or disposing of it. You must make sure they don’t get poisoned by any chemicals that have been built into the thing. Even thinking about if I had an accident in the vehicle, or whatever it is that has been damaged or destroyed, and I’ve now got debris spread across the place, how do we clear that up? For some systems that can be a very challenging problem.

Ergonomics & Work Design

So, we’re going to move on now to a different subject, and a very important subject in safe design. I think this is one of the great things about safe design and good work design in Australia – that it incorporates ergonomics. We need to think about human interaction with the system, as well as the technical design itself, and I think that’s very important. It’s something that is very easy, especially for technical people, to miss. As engineers some of us love diving into the detail, that’s where we feel comfortable, that’s what we want to do, and then maybe we miss sometimes the big picture – somebody is actually going to use this thing and make it work. we need to think about all of our workers to make sure that they stay healthy and safe at work. We need to think about how they are going to physically interact with the system, etc. It may not be just the physical system that we’re designing, but of course, the work processes that go around it, which is important.

It is worth pointing out that in the UK I’m used to a narrower definition of ergonomics, one that’s purely about the physical way that humans interact with the system. Can they do so safely and comfortably? Can they do repetitive tasks without getting injured? That includes anthropometric aspects, where we think about the variation in size of human beings of different sexes and races, and how people fit in the machine or the vehicle, or interact with it.

However, in Australia, the way we talk about ergonomics is much bigger than that. I would say don’t just think about ergonomics; think about human factors. It’s the science of people at work: let’s understand human capabilities and apply that knowledge in the design of equipment, tools, systems and ways of working that we expect the human to use. Humans are pretty clever beasts in many ways, and we’re still very good at things that a lot of machines are just not very good at. So we need to design stuff which complements the human being and helps the human being to succeed, rather than just optimising the technical design in isolation. This quotation is from the ARPANSA definition, because it was the best one that I could find in Australia. I will no doubt talk about human factors another time in some depth.

Responsibilities

Under the law (this is tailored for Australian law, but much of it is good principle that applies anywhere), different groups and individuals have responsibilities for safe design: those who manage the design and the technical work directly, and those who make resourcing decisions. For example, we can think about building architects, industrial designers and the like, who create the design; and individuals who make design decisions at any lifecycle phase, which could be a wide range of people, not just technical people, but stakeholders who make decisions about how people are employed, how people are to interact with these systems, and how they are to maintain and dispose of them. And of course, there are work health and safety professionals themselves. Potentially there is a wide range of stakeholders involved here.

Also, anybody who alters the design has responsibilities. It may be that we’re talking about a physical alteration to the design, or maybe we’re just using a piece of kit in a different context: we’re using a machine or a process or a piece of software that was designed to do X, and we’re actually using it to do Y, which is more common than you might think. If we are putting an existing design into a different context, one that was never envisaged by the original designers, we need to think about the implications of the environment on the design, of the design on the environment, and for the human beings mixed up working in both.

There’s a lot of accidents caused by modifying bits of kit, including you might say, a signature accident in the UK: the Flixborough Chemical Plant explosion. That was one of the things that led to the creation of modern Health and Safety law in the UK. It was caused by people modifying the design and not fully realising the implications of what they were doing. Of course, the result was a gigantic explosion and lots of dead bodies. Hopefully it won’t always be so dramatic with the things that we’re looking at, but nevertheless, people do ask designs to do some weird stuff.

If we want safe design, we can get it more effectively and more efficiently when the people who control and influence outcomes, and who make these decisions, get together and collaborate on building safety into the design, rather than trying to add it on afterwards, which in my experience never goes well. We want to get people together and think about these things up front, when it’s maybe a desktop exercise or a meeting somewhere. That requires some people to attend the meeting and prepare for it, and we need records, but that’s cheap compared to later design stages. When we’re talking about industrial plant, or something that’s going to be mass-produced, making decisions later is always going to be more costly and less effective, and therefore less popular and harder to justify. So get in and do it early while you still can. There’s some good guidance on all of this, and on who is responsible.

There is the Principles of Good Work Design handbook, which was created by Safe Work Australia and is also on the Safety Artisan website (I gained permission to reproduce it), and there’s a model Code of Practice for the safe design of structures. There was going to be a model Code of Practice for the safe design of plant, but that never became a Code of Practice; it remains guidance. Nevertheless, there is a lot of good stuff in there. And there are the Work Health and Safety Regulations.

And incidentally, there’s also a lot of good guidance on Major Hazard Facilities available. Major Hazard Facilities are anywhere where you store large amounts of potentially dangerous chemicals. However, the safety principles that are in the guidance for the MHF is very good and is generally applicable not just for chemical safety, but for any large undertaking where you could hurt a lot of people on that. MHF guidance I believe was originally inspired by the COMAH regulations in the UK, again which came from a major industrial disaster, Piper Alpha platform in the North Sea which caught fire and killed 167 people. It was a big fire. if you’ve got an enterprise where you could see a mass casualty situation, you’ll get a lot more guidance from the MHF stuff that’s really aimed at preventing industrial-sized accidents. there’s lots of good stuff available to help us.

Design for Plant

So, here are examples of things that we should consider. We need to (and I don’t think this will be any great surprise to you) think about all phases of the lifecycle; I think we’ve banged on about that enough. Whether it be plant (a waste plant, in this case) or whatever it might be, consider it from design and manufacture right through to disposal. Can we put the plant together? Can we erect the plant or structure and install it? Can we facilitate safe use? Again, think about the physical characteristics of your users, but not just the physical; think about the cognitive abilities of your users as well. If we’re making a control system, can the users use it to practically exploit the plant for the purpose it was meant for, whilst staying safe? What can the operator actually do? What can we expect them to perform successfully and reliably, time after time? Because we want this stuff to keep on working for a long, long time, in order to make money or to do whatever it is we want to do. And we also need to think about the environment in which the plant will be used, which is very important.

Some more examples: think about the intended use and reasonably foreseeable misuse. If you know that a piece of kit tends to get misused for certain things, then either design against it or give the operator a better way of doing it. A really strange example: apparently the designers of a particular assault rifle knew that soldiers tended to use a bit of the rifle as a can opener or to open beer bottles, so they incorporated a bottle opener into the design so that the soldiers would use that rather than damage the rifle opening bottles of beer. A crazy example, but I think it’s memorable. We have to consider intended use by law; if you go to the WHS lesson, you’ll see that it is written right through the duties of everybody. Reasonably foreseeable misuse, I don’t think, is a hard requirement in every case, but it’s still a sensible thing to consider.

Think about the difficulties that workers might face doing repairs or maintenance. Again, sorry, I’ve banged on about that; I came from a maintenance world originally, so I’m very used to those kinds of challenges. And consider what could go wrong. Here we’re getting back into classic design safety: think about the failure modes of your plant. Ideally, we would always want a fail-safe design but, if we can’t have that, how can we warn people? How can we minimise the risk if something goes wrong and a foreseeable hazard occurs? And by foreseeable, I’m not just saying we shut ourselves in a darkened room and couldn’t think of anything; we need to look at real-world examples of similar pieces of kit. Look at real-world history, because there’s often an awful lot of learning out there that we can exploit, if only we bother to Google it or look it up in some way. As I think Bismarck, the great German leader, said: only a fool learns from his own mistakes; a wise man learns from other people’s mistakes. That’s what we try to do in safety.
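As a small illustration of the fail-safe idea above (my own sketch; the component names, plausible range and safe state are all invented), a controller can be written so that a failed or implausible sensor reading drives the plant to a defined safe state rather than letting it carry on with bad data.

```python
# Illustrative fail-safe pattern (author's sketch): if the sensor fails or gives
# an implausible reading, the plant drops to a defined safe state and raises a
# warning, rather than carrying on with bad data. All names are invented.

SAFE_STATE = {"valve": "closed", "motor": "off"}

def read_pressure_bar(raw):
    """Return a plausible pressure reading, or None if the sensor has failed."""
    if raw is None or not (0.0 <= raw <= 25.0):   # assumed plausible range
        return None
    return raw

def control_step(raw_sensor_value):
    pressure = read_pressure_bar(raw_sensor_value)
    if pressure is None:
        print("WARNING: pressure sensor fault -- reverting to safe state")
        return dict(SAFE_STATE)
    # Normal operation (placeholder logic).
    return {"valve": "open" if pressure < 10.0 else "closed", "motor": "on"}

print(control_step(8.2))    # normal operation
print(control_step(None))   # sensor failure -> safe state
print(control_step(180.0))  # implausible reading -> safe state
```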

Product Lifecycle

Moving on to lifecycle, this is a key concept. Again, I’ve gone on and on about this. We need to control risks not just during use, but during construction and manufacture, in transit, when it’s being commissioned and tested, when it’s being used and operated, and when it’s being repaired, maintained, cleaned or modified. And then there’s the end of life, although it may not actually be the end of life when it’s being decommissioned: maybe we’re decommissioning the kit to move it to a new site, or maybe a new owner has bought it. So we need to be able to safely take it apart, move it and put it back together again. And of course, by that stage we may have lost the original packaging, so we may have to think quite carefully about how we do this; or maybe we can’t fully disassemble it as we did during the original installation, so we’ve got to move an awkward bit of kit around. And then, at the true end of life, how are we going to dismantle it or demolish it? Are we going to dispose of it or, ideally, recycle it? Hopefully, if we haven’t built in anything too nasty or too difficult to recycle, we can do that; that would be a good thing.

It’s worth reminding ourselves, we do get a safer product that is better for downstream users if we eliminate and minimise those hazards as early as we can. as I said before, in these early phases, there’s more scope in order to design out stuff without compromising the design, without putting limitations on what it can do. Whereas often when you’re adding safety in, so often that is achieved only at a cost in terms of it limits what the users can do or maybe you can’t run the plant at full capacity or whatever it might be, which is clearly undesirable. designers must have a good understanding of the lifecycle of their kit and so do those people who will interact with it and the environment in which it’s used. Again, if you’ve listened to me talking about our system safety concepts we hammer this point about it’s not just the plant it’s what you use it for, the people who will use it and the environment in which it is used. especially for complex things, we need to take all those things into account. And it’s not a trivial exercise to do this.

Then, thirdly, as we go through the product lifecycle, we may discover new risks, and this does happen. People make assumptions during the concept and design phases (and fair enough, you must make assumptions sometimes in order to get anything done). But those assumptions don’t always turn out to be completely correct, or something gets missed; we often miss some aspect. It’s the thing you didn’t anticipate that often gets you.

As we go through the lifecycle, we can further improve safety if the people who have control over decisions and actions incorporate health and safety considerations at every stage, and proactively look at whether we can make things better, or whether something has occurred that we didn’t anticipate and that therefore needs to be looked into.

Another good principle that doesn’t always get followed: we shouldn’t proceed to the next stage in the lifecycle until we have completed our design reviews, we have thought about health and safety along with everything else, and those who have control have had a chance to consider everything together and, if they’re happy with it, to approve it so it moves on. It’s a very good illustration. Again, it will come as no surprise to many listeners that there are a lot of projects out there that either don’t hold design reviews at all, or you see design reviews being rushed. Lip service is paid to them very often, because people forget that design reviews are there to review the design and make sure it is fit for purpose and safe and all the other good things; we just get obsessed with getting through those reviews, whether we’re the purchaser, or whether we’re just keen to get on with the job and the schedule must be maintained at all costs.

Or if you’re the supplier, you want to get through those reviews because there’s a payment milestone attached to them. There’s a lot of temptation to rush these things. Often, rushing these things just results in more trouble further downstream. I know it takes a lot of guts, particularly early in a project to say, no: we’re not ready for this design review, we need to do some more work so that we can get through this properly. That’s a big call to make, often because not a lot of people are going to like you for making that call, but it does need to happen.

Benefits of Safe Design

So, let’s talk about the benefits. These are not my estimates, these are Safe Work Australia’s words, so they feel that from what they’ve seen in Australia and now surveying safety performance elsewhere, I suspect as well, that building safety into a plant can save you up to 10% of its cost. Whether it be through, an example here is reductions in holdings of hazardous materials, reduce the need for personal protective equipment, reduce need filled testing and maintenance, and that’s a good point. Very often we see large systems, large enterprises brought in to being without sufficient consideration of these things, and people think only about the capital costs of getting the kit into service. Now, if you’re spending millions or even possibly billions on a large infrastructure project, of course, you will focus on the upfront costs for that infrastructure. And of course, you are focused on getting that stuff into service as soon as possible so you can start earning money to pay for the capital costs of it.

But it’s also worth thinking about safety upfront. A lot of other design disciplines as well, of course, and making sure that you’re not building yourself a life cycle, a lifetime full of pain, doing maintenance and testing that, to be honest, you really don’t want to be doing, but because you didn’t design something out, you end up with no choice. And so, we can hopefully eliminate or minimise those direct costs with unsafe design, which can be considerable rework, compensation, insurance, environmental clean-up. You can be sued by the government for criminal transgressions and you can be sued by those who’ve been the relatives of the dead, the injured, the inconvenienced, those who’ve been moved off their land.

And these things will impact on parties downstream, not the designers; in fact often, but not always, just those who bought the product and used it. There’s a lot of incentive out there to minimise your liability, to get it right upfront and to be able to demonstrate that you got it right upfront. Particularly if you’re a designer or a manufacturer and you’re worried that some of your users are maybe not as professional and conscientious in using your stuff as you would like, because it’s still got your name and your company logo plastered all over it.

I don’t think there’s anything new in here. There are many benefits or we see the direct benefits. We’ve prevented injury and disease and that’s good. Not just your own, but other peoples. We can improve usability, very often if you improve safety through improving human factors and ergonomics, you’re going to get a more usable product that people like using, it is going to be more popular. Maybe you’ll sell more. You’ll improve productivity. those who are paying for the output are happy. You’ll reduce costs, not only reduce costs, (through life I’m talking about you might have to spend a little bit more money upfront), we can actually better predict and manage operations because we’re not having so many outages due to incidents or accidents.

Also, we can demonstrate compliance with legislation, which will help you plug the kit in the first place, but which is also necessary if you’re going to get past a regulator, or indeed if you don’t want to be sent to jail for contravening the WHS Act.

And the final benefit: innovation. I have to say innovation is a double-edged sword, because some places love innovation and you’ll be very popular if you innovate; other industries hate innovation and you will not be popular if you innovate. That last bullet, I’m not so sure it’s about innovation. Safe design, I don’t think, necessarily demands new thinking; it just demands thinking. Because most of the preventable things that I’ve seen go wrong could have been stopped with a little bit of thought, a little bit of imagination and a little bit of learning from the past, not by ‘innovating’ the future.

Legal Obligations

So that brings us neatly on to our legal obligations. In Australia (and in other countries there will be similar obligations), work health and safety law imposes duties on lots of people: designers, manufacturers, importers, suppliers, and anybody who puts the stuff together, erects it, modifies it or disposes of it. These obligations, as it says, will vary depending on the state or territory, or whether Commonwealth WHS applies. But if it’s WHS, it’s all based on the model WHS laws from Safe Work Australia, so it will be very similar. In the WHS lesson, I talk about what these duties are and what you must do to meet them. You will be pleased to know that the guidance on safe design is in lockstep with those requirements. So this is all good stuff, not because I’m saying it, but because I’m showing you what has come out of the statutory authority.

Yes, these obligations may vary; we talk about that quite a lot in other sessions. Those who make decisions, not just technical people but those who control the finances, have duties under WHS law. Again, go and see the WHS lesson that talks about the duties, particularly the duties of senior management officers and due diligence. There are specific safety ‘due diligence’ requirements in WHS, which are very well written and very easy to read and understand, so there’s no excuse for not looking at this stuff; it is very easy to see what you’re supposed to do and how to stay on the right side of the law. And it doesn’t matter whether you’re an employer or self-employed, or whether you control a workplace or not; there are duties on designers upstream who will never go near the workplace in which the kit is actually used. And if a client has a building or structure designed and built for leasing, they become the owner of the building and may well retain health and safety duties for the lifetime of that building, if it is used as a workplace or to accommodate workers.

Recap

I just want to briefly recap on what we’ve heard. The big lesson that I’ve learned in my career is that safe design is not just a technical activity for the designers. I’ve worked in many organisations where the pedigree, the history of the organisation, was that technical risks were managed over here and human or operational risks were managed over there, and there was a great gulf between them; they never interacted very much. There was a sort of handover point where they would chuck the kit over the wall to the users and say, “There, get on with it, and if you have an accident, it’s your fault because you’re stupid and you didn’t understand my piece of kit.” And similarly, you got the operators saying, “All those technical people have got no idea how we use the kit or what we’re trying to do here, the big picture; they give us kit that is not suitable, or that we have to misuse in order to get it to do the job.”

So, if you have these two teams playing separately, not interacting and not cooperating, it’s a mess. And certainly in Australia, there are very explicit requirements in the law and regulations, and a whole code of practice, on consultation, communication and cooperation. These two sides of the operation have got to come together in order to make the whole thing work. And WHS law does not differentiate between different types of risk; there is just risk to people. So you cannot hide behind the excuse that “I do technical risk, I don’t think about how people will use it”; if you do, you’ve just broken the law. You’ve got to think about the big picture; we can’t keep going on in our silos, our stovepipes.

That’s a little bit of a heart to heart, but that really, I think, is the value add from all of this. The great thing about this design guidance is that it encourages you to think through life, it encourages you to think about who is going to use it and it encourages you to think about the environment. And you can quite cheaply and quite quickly, you could make some dramatic improvements in safety by thinking about these things.

I’ve met a lot of technical people, who think that if a risk control measure isn’t technical, if it isn’t highly complicated and involves clever engineering, then some people have got a tendency to look down their nose at it. What we need to be doing is looking at how we reduce risk and what the benefits are in terms of risk reduction, and it might be a really very simple thing that seems almost trivial to a technical expert that actually delivers the safety, and that’s what we’ve got to think about not about having a clever technical solution necessarily. If we must have a clever technical solution to make it safe, well, so be it. But, we’d quite like to avoid that most of the time if we can.

Australian Approach

In Australia, for the 10 years to 2022, we have certain targets. We’ve got seven national action areas, and safe design (or ‘healthy and safe by design’) is one of them. As I’ve said several times, Australian legislation requires us to consult, cooperate and coordinate, so far as is reasonably practicable, and we need to work together rather than chuck problems over the wall to somebody else. You might think you’ve delegated responsibility to somebody else but, actually, if you’re an officer of the person conducting the business or undertaking, then you cannot ditch all of your responsibilities, so you need to think very carefully about what’s being done in your name, because legally it can come back to you. You can’t just assume that somebody else is doing it and will do a reasonable job; it’s your duty to ensure that it is done, that you’ve provided the resources and that it is actually happening.

And so, what we want to do in this 10-year period is achieve a real reduction: a 30% reduction in serious injuries nationwide, and a reduction in work-related fatalities of at least a fifth. These are specific and valuable targets; they’re real-world targets. This is not an academic exercise; it’s about reducing the body count and the number of people who end up in hospital, blinded or missing limbs or whatever. So it’s important stuff. And, as it says, Safe Work Australia and all the different regulators have been working together with industry, unions and special interest groups in order to make this all happen. That’s all excellent stuff.

Safe Design – the End

And it just remains for me to say that most of the text that I’ve shown you is from the Safe Work Australia website, and it has been reproduced under a Creative Commons licence; you can see the full details on the safetyartisan.com website. Just to point out that the words of this presentation itself are copyright of The Safety Artisan. (I’ve just realised that I drafted this in 2019 and it says copyright 2020; never mind, I started writing it in 2019.)

Now, if you want more lessons on safety topics, please visit the Safety Artisan page at Patreon.com; there are many more resources and safety answers on the website. That is the end of the presentation, so thank you very much for listening and watching. From The Safety Artisan, I wish you a successful and safe 2020. Goodbye.

[END]

Back to Safe Design Page | Back to Home Page

Professional | Pragmatic | Impartial

Safe Design in Australia

This post provides an overview of Safe Design in Australia. It has been edited from the Safe Work Australia webpage to remove some material.

The original webpage is © Commonwealth of Australia, 2020; it is covered by a Creative Commons licence (CC BY 4.0) – for full details see here. Any additions are indicated [thus].

Introduction

Safe design is about integrating hazard identification and risk assessment methods early in the design process, to eliminate or minimise risks of injury throughout the life of a product. This applies to buildings, structures, equipment and vehicles.

Statistics and Research

  • Of 639 work-related fatalities from 2006 to 2011, one-third (188) were caused by unsafe design or design-related factors contributed to the fatality [a quick check of these figures follows the list].
  • Of all fatalities where safe design was identified as an issue, one fifth (21%) was caused by inadequate protective guarding for workers.
  • 73% of all design related fatalities were from agriculture, forestry and fishing, construction and manufacturing industries.
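[Added: a quick arithmetic check of the headline figures above; the snippet is illustrative only and not part of the Safe Work Australia material.]

```python
# [Added: quick check of the headline figures quoted above.]
design_related, total = 188, 639
print(f"{design_related / total:.1%}")   # ~29.4%, i.e. roughly one-third
print(f"{0.21 * design_related:.0f}")    # ~39 fatalities linked to inadequate guarding (21% of 188)
```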

A safe design approach

Safe design begins at the concept development phase of a structure when you’re making decisions about:

  • the design and its intended purpose
  • materials to be used
  • possible methods of construction, maintenance, operation, demolition or dismantling and disposal
  • what legislation, codes of practice and standards need to be considered and complied with.

Designers need to consider how safety can best be achieved in each of the lifecycle phases, for example:

  • Designing a machine with protective guarding that will allow it to be operated safely, while also ensuring it can be installed, maintained and disposed of safely.
  • Designing a building with a lift for occupants, where the design also includes sufficient space and safe access to the lift well or machine room for maintenance work.

Five principles of safe design

  • Principle 1: Persons with control—those who make decisions affecting the design of products, facilities or processes are able to promote health and safety at the source.
  • Principle 2: Product lifecycle—safe design applies to every stage in the lifecycle from conception through to disposal. It involves eliminating hazards or minimising risks as early in the lifecycle as possible.
  • Principle 3: Systematic risk management—apply hazard identification, risk assessment and risk control processes to achieve safe design.
  • Principle 4: Safe design knowledge and capability—should be either demonstrated or acquired by those who control design.
  • Principle 5: Information transfer—effective communication and documentation of design and risk control information amongst everyone involved in the phases of the lifecycle is essential for the safe design approach.

These principles have been derived from Towards a Regulatory Regime for Safe Design [note that this is a 230-page document and somewhat outdated].  For more [useful] detail see Guidance on the principles of safe design for work.

Figure 1: A model for safe design


Ergonomics and good work design

Safe design incorporates ergonomics principles as well as good work design.

  • Good work design helps ensure workplace hazards and risks are eliminated or minimised so all workers remain healthy and safe at work. It can involve the design of work, workstations, operational procedures, computer systems or manufacturing processes.

Responsibility for safe design

When it comes to achieving safe design, responsibility rests with those groups or individuals who control or manage design functions. This includes:

  • Architects, industrial designers or draftspersons who carry out the design on behalf of a client.
  • Individuals who make design decisions during any of the lifecycle phases such as engineers, manufacturers, suppliers, installers, builders, developers, project managers and WHS professionals.
  • Anyone who alters a design.
  • Building service designers or others designing fixed plant such as ventilation and electrical systems.
  • Buyers who specify the characteristics of products and materials, such as masonry blocks, and by default decide the weights bricklayers must handle.

Safe design can be achieved more effectively when all the parties who control and influence the design outcome collaborate on incorporating safety measures into the design.

For more information on who is responsible for safe design see Guidance on the principles of safe design for work, the Principles of Good Work Design Handbook and the model Code of Practice: Safe Design of Structures and WHS Regulations.

Design considerations for plant

Examples of things you should consider when designing plant include:

  • All the phases in the lifecycle of an item of plant from manufacture through use, to dismantling and disposal.
  • Design for safe erection and installation.
  • Design to facilitate safe use by considering, for example, the physical characteristics of users, the maximum number of tasks an operator can be expected to perform at any one time, the layout of the workstation or environment in which the plant may be used.
  • Consider intended use and reasonably foreseeable misuse.
  • Consider the difficulties workers may face when maintaining or repairing the plant.
  • Consider types of failure or malfunction and design the plant to fail in a safe manner.

Product lifecycle

The lifecycle of a product is a key concept of sustainable and safe design. It provides a framework for eliminating the hazards at the design stage and/or controlling the risk as the product is:

  • constructed or manufactured
  • imported, supplied or installed
  • commissioned, used or operated
  • maintained, repaired, cleaned, and/or modified
  • de-commissioned, demolished and/or dismantled
  • disposed of or recycled.

A safer product will be created if the hazards and risks that could impact on downstream users in the lifecycle are eliminated or controlled during design, manufacture or construction. In these early phases, there is greater scope to design-out hazards and/or incorporate risk control measures that are compatible with the original design concept and functional requirements of the product.

  • Designers must have a good understanding of the lifecycle of the item they are designing, including the needs of users and the environment in which that item may be used.

New risks may emerge as products are modified or the environments in which they are used change.

Safety can be further improved if each person who has control over actions taken in any of the lifecycle phases takes steps to ensure health and safety is pro-actively addressed, by reviewing the design and checking it meets safety standards in each of the lifecycle phases.

Subsequent stages of the product’s lifecycle should not go ahead until the preceding phase design reviews have been considered and approved by those with control.

Figure 2: Lifecycle of designed products 


Benefits of safe design

It is estimated that inherently safe plant and equipment would save between 5–10% of their cost through reductions in inventories of hazardous materials, reduced need for protective equipment and the reduced costs of testing and maintaining the equipment.

  • The direct costs associated with unsafe design can be significant, for example retrofitting, workers’ compensation and insurance levies, environmental clean-up and negligence claims. Since these costs impact more on parties downstream in the lifecycle who buy and use the product, the incentive for these parties to influence and benefit from safe design is also greater.

A safe design approach results in many benefits; it helps you to:

  • prevent injury and disease
  • improve useability of products, systems and facilities
  • improve productivity
  • reduce costs
  • better predict and manage production and operational costs over the lifecycle of a product
  • comply with legislation
  • innovate, in that safe design demands new thinking.

Australian WHS laws impose duties on a range of parties to ensure health and safety in relation to particular products such as:

  • designers of plant, buildings and structures
  • building owners and persons with control of workplaces
  • manufacturers, importers and suppliers of plant and substances
  • persons who install, erect or modify plant.

These obligations may vary depending on the relevant state, territory or Commonwealth WHS legislation.

Those who make decisions that influence design such as clients, chief financial officers, developers, builders, directors and managers will also have duties under WHS laws if they are employers, self-employed or if they manage or control workplaces.

  • For example, a client who has a building or structure designed and built for leasing becomes the owner of the building and may therefore have a duty as a person who manages or controls a workplace.

There are other provisions governing the design of buildings and structures in state and territory building laws. The Building Code of Australia (BCA) is the principal instrument for regulating architects, engineers and others involved in the design of buildings and structures.

  • Although the BCA provides minimum standards to ensure the health and safety of building occupants (such as structural adequacy, fire safety, amenities and ventilation), it does not cover the breadth of WHS matters that may arise during the construction phase or in the use of buildings and structures as workplaces.

In addition, there are technical design standards and guidelines produced by government agencies, Standards Australia and relevant professional bodies.

Healthy and safe by design

This is one of the Seven action areas in the Australian Work Health and Safety Strategy 2012-2022.

Hazards are eliminated or minimised by design

The most effective and durable means of creating a healthy and safe working environment is to eliminate hazards and risks during the design of new plant, structures, substances and technology and of jobs, processes and systems. This design process needs to take into account hazards and risks that may be present at all stages of the lifecycle of structures, plant, products and substances.

Good design can eliminate or minimise the major physical, biomechanical and psychosocial hazards and risks associated with work. Effective design of the overall system of work will take into account, for example, management practices, work processes, schedules, tasks and workstation design.

Sustainable return to work or remaining at work while recovering from injury or illness is facilitated by good job design and management. Managers have an obligation to make reasonable adjustments to the design of the work and work processes to accommodate individuals’ differing capabilities.

Workers’ general health and wellbeing are strongly influenced by their health and safety at work. Well-designed work can improve worker health. Activities under the Australian Strategy build appropriate linkages with healthy worker programs to support improved general worker wellbeing as well as health and safety.

National activities support the following outcomes:

  • Structures, plant and substances are designed to eliminate or minimise hazards and risks before they are introduced into the workplace.
  • Work, work processes and systems of work are designed and managed to eliminate or minimise hazards and risks.

[END]

Back to Safe Design Page | Back to Home Page

Professional | Pragmatic | Impartial

Mil-Std-882E 400-Series Tasks

This is Mil-Std-882E 400-Series Tasks
Back to the previous excerpt: 300-Series Tasks [Link TBD]

The 400-series tasks fall into two groups. Task 401 covers Safety Verification, and it is surprisingly brief for such an important task. Tasks 402 and 403 are specialist tasks related to explosives, providing explosive-specific requirements for hazard classification and explosive ordnance disposal, respectively.

TASK 401 SAFETY VERIFICATION

401.1 Purpose. Task 401 is to define and perform tests and demonstrations or use other verification methods on safety-significant hardware, software, and procedures to verify compliance with safety requirements.

401.2 Task description. The contractor shall define and perform analyses, tests, and demonstrations; develop models; and otherwise verify the compliance of the system with safety requirements on safety-significant hardware, software, and procedures (e.g., safety verification of iterative software builds, prototype systems, subsystems, and components). Induced or simulated failures shall be considered to demonstrate the acceptable safety performance of the equipment and software.

401.2.1 When analysis or inspection cannot determine the adequacy of risk mitigation measures, tests shall be specified and conducted to evaluate the overall effectiveness of the mitigation measures. Specific safety tests shall be integrated into appropriate system Test and Evaluation (T&E) plans, including verification and validation plans.

401.2.2 Where safety tests are not feasible, the contractor shall recommend verification of compliance using engineering analyses, analogies, laboratory tests, functional mockups, or models and simulations.

401.2.3 Review plans, procedures, and the results of tests and inspections to verify compliance with safety requirements.

401.2.4 The contractor shall document safety verification results and submit a report that includes the following:

  • a. Test procedures conducted to verify or demonstrate compliance with the safety requirements on safety-significant hardware, software, and procedures.
  • b. Results from engineering analyses, analogies, laboratory tests, functional mockups, or models and simulations used.
  • c. T&E reports that contain the results of the safety evaluations, with a summary of the results provided.

401.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include the following, as applicable:

  • a. Imposition of Task 401. (R)
  • b. Identification of functional discipline(s) to be addressed by this task. (R)
  • c. Other specific hazard management requirements, e.g., specific risk definitions and matrix to be used on this program.
  • d. Any special data elements, format, or data reporting requirements (consider Task 106, Hazard Tracking System).
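Task 401.2 asks for induced or simulated failures to be considered when demonstrating acceptable safety performance. As a hedged illustration from The Safety Artisan (not part of the Standard), a verification case for a safety-significant function might inject a simulated failure and confirm the specified safe response; the requirement ID and component names below are invented.

```python
# Illustrative only -- not part of Mil-Std-882E. A sketch of verifying a
# safety requirement by injecting a simulated failure, as Task 401.2 asks
# you to consider. The requirement ID and component names are invented.

class BrakeController:
    """Hypothetical safety-significant component under verification."""
    def command(self, demanded_torque: float, sensor_ok: bool) -> float:
        # Safety requirement SR-012 (invented): on sensor failure, apply full braking.
        if not sensor_ok:
            return 1.0                      # full braking = assumed safe state
        return max(0.0, min(1.0, demanded_torque))

def verify_sr_012_with_induced_failure() -> bool:
    """Simulate the sensor failure and confirm the specified safe response."""
    ctrl = BrakeController()
    return ctrl.command(demanded_torque=0.3, sensor_ok=False) == 1.0

print("SR-012 verified:", verify_sr_012_with_induced_failure())
```

Results of tests like this would then feed the verification report items listed in 401.2.4 above.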

TASK 402 EXPLOSIVES HAZARD CLASSIFICATION DATA

402.1 Purpose. Task 402 is to perform tests and analyses, develop data necessary to comply with hazard classification regulations, and prepare hazard classification approval documentation associated with the development or acquisition of new or modified explosives and packages or commodities containing explosives (including all energetics).

402.2 Task description. The contractor shall provide hazard classification data to support program compliance with the Department of Defense (DoD) Ammunition and Explosives Hazard Classification Procedures (DAEHCP) (Army Technical Bulletin 700-2, Naval Sea Systems Command Instruction 8020.8, Air Force Technical Order 11A-1-47, and Defense Logistics Agency Regulation 8220.1). Such pertinent data may include:

  • a. Narrative information to include functional descriptions, safety features, and similarities and differences to existing analogous explosive commodities, including packaging.
  • b. Technical data to include Department of Defense Identification Codes (DODICs) and National Stock Numbers (NSNs); part numbers; nomenclatures; lists of explosive compositions and their weights, whereabouts, and purposes; lists of other hazardous materials and their weights, volumes, and pressures; technical names; performance or product specifications; engineering drawings; and existing relevant Department of Transportation (DOT) classification of explosives approvals.
  • c. Storage and shipping configuration data to include packaging details.
  • d. Test plans.
  • e. Test reports.
  • f. Analyses.

402.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include the following, as applicable:

  • a. Imposition of Task 402. (R)
  • b. Hazard classification data requirements to support the Integrated Master Schedule. (R)
  • c. Hazard classification data from similar legacy systems.
  • d. Any special data elements or formatting requirements.

TASK 403 EXPLOSIVE ORDNANCE DISPOSAL DATA

403.1 Purpose. Task 403 is to provide Explosive Ordnance Disposal (EOD) source data, recommended render-safe procedures, and disposal considerations. Task 403 also includes the provision of test items for use in new or modified weapons systems, explosive ordnance evaluations, aircraft systems, and unmanned systems.

403.2 Task description. The contractor shall:

  • a. Provide detailed source data on explosive ordnance design functioning and safety so that proper EOD tools, equipment, and procedures can be validated and verified.
  • b. Recommend courses of action that EOD personnel can take to render safe and dispose of explosive ordnance.
  • c. Provide test ordnance for conducting EOD validation and verification testing. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of assets required.
  • d. Provide training aids for conducting EOD training. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of training aids required.

403.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include, as applicable:

  • a. Imposition of Task 403. (R)
  • b. The number and types of test items for EOD validation and verification testing. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of assets required.
  • c. The number and types of training aids for EOD training. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of training aids required.

Forward to the next excerpt: Appendix A

Back to the Home Page | Mil-Std-882 Page | System Safety Page

Professional | Pragmatic | Impartial