This is MIL-STD-882E Appendix B.
SOFTWARE SYSTEM SAFETY ENGINEERING AND ANALYSIS
B.1 Scope. This Appendix is not a mandatory part of the standard. The information contained herein is intended for guidance only. This Appendix provides additional guidance on the software system safety engineering and analysis requirements in 4.4. For more detailed guidance, refer to the Joint Software Systems Safety Engineering Handbook and Allied Ordnance Publication (AOP) 52, Guidance on Software Safety Design and Assessment of Munition-Related Computing Systems.
B.2. Software system safety. A successful software system safety engineering activity is based on a hazard analysis process, a safety-significant software development process, and Level of Rigor (LOR) tasks. The safety-significant software development process and LOR tasks comprise the software system safety integrity process. Emphasis is placed on the context of the “system” and how software contributes to or mitigates failures, hazards, and mishaps. From the perspective of the system safety engineer and the hazard analysis process, software is considered as a subsystem. In most instances, the system safety engineers will perform the hazard analysis process in conjunction with the software development, software test, and Independent Verification and Validation (IV&V) team(s). These teams will implement the safety-significant software development and LOR tasks as a part of the overall Software Development Plan (SDP). The hazard analysis process identifies and mitigates the exact software contributors to hazards. The software system safety integrity process increases the confidence that the software will perform as specified to software system safety and performance requirements while reducing the number of contributors to hazards that may exist in the system. Both processes are essential in reducing the likelihood of software initiating a propagation pathway to a hazardous condition or mishap.
B.2.1 Software system safety hazard analysis. System safety engineers performing the hazard analysis for the system (Preliminary Hazard Analysis (PHA), Subsystem Hazard Analysis (SSHA), System Hazard Analysis (SHA), System-of-Systems (SoS) Hazard Analysis, Functional Hazard Analysis (FHA), Operating and Support Hazard Analysis (O&SHA), and Health Hazard Analysis (HHA)) will ensure that the software system safety engineering analysis tasks are performed. These tasks ensure that software is considered in its contribution to mishap occurrence for the system under analysis, as well as interfacing systems within an SoS architecture. In general, software functionality that directly or indirectly contributes to mishaps, such as the processing of safety-significant data or the transitioning of the system to a state that could lead directly to a mishap, should be thoroughly analyzed. Software sources and specific software errors that cause or contribute to hazards should be identified at the software module and functional level (e.g., the function occurs out-of-time or out-of-sequence, malfunctions, degrades in function, or does not respond appropriately to system stimuli). In software-intensive, safety-significant systems, mishap occurrence will likely be caused by a combination of hardware, software, and human errors. These complex initiation pathways should be analyzed and thoroughly tested to identify existing and/or derived mitigation requirements and constraints to the hardware and software design. As a part of the FHA (Task 208), identify software functionality which can cause, contribute to, or influence a safety-significant hazard. Software requirements that implement Safety-Significant Functions (SSFs) are also identified as safety significant.
B.2.2 Software system safety integrity. Software developers and testers play a major role in producing safe software. Their contribution can be enhanced by incorporating software system safety processes and requirements within the SDP and task activities. The software system safety processes and requirements are based on the identification and establishment of specific software development and test tasks for each acquisition phase of the software development life-cycle (requirements, preliminary design, detailed design, code, unit test, unit integration test, system integration test, and formal qualification testing). All software system safety tasks will be performed at the required LOR, based on the safety criticality of the software functions within each software configuration item or software module of code. The software system safety tasks are derived by performing an FHA to identify SSFs, assigning a Software Control Category (SCC) to each of the safety-significant software functions, assigning a Software Criticality Index (SwCI) based on severity and SCC, and implementing LOR tasks for safety-significant software based on the SwCI. These software system safety tasks are further explained in subsequent paragraphs.
B.2.2.1 Perform a functional hazard analysis. The SSFs of the system should be identified. Once identified, each SSF is assessed and categorized against the SCCs to determine the level of control of the software over safety-significant functionality. Each SSF is mapped to its implementing computer software configuration item or module of code for traceability purposes.
B.2.2.2 Perform a software criticality assessment for each SSF. The software criticality assessment should not be confused with risk. Risk is a measure of the severity and probability of occurrence of a mishap from a particular hazard, whereas software criticality is used to determine how critical a specified software function is with respect to the safety of the system. The software criticality is determined by analyzing the SSF in relation to the system and determining the level of control the software exercises over functionality and contribution to mishaps and hazards. The software criticality assessment combines the severity category with the SCC to derive a SwCI as defined in Table V in 4.4.2 of this Standard. The SwCI is then used as part of the software system safety analysis process to define the LOR tasks which specify the amount of analysis and testing required to assess the software contributions to the system-level risk.
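The assessment step described above is essentially a table lookup: severity category crossed with SCC yields the SwCI. The sketch below illustrates that mechanic only; the matrix values are my own transcription of Table V and the SCC names are abbreviated, so verify both against the standard itself before relying on them.

```python
# Illustrative sketch of the SwCI lookup (severity x SCC -> SwCI).
# Matrix values are my reading of Table V in MIL-STD-882E, 4.4.2 --
# verify against the standard before use.

# Rows: Software Control Category (SCC) 1-5; columns: severity category.
SSCM = {
    1: {"Catastrophic": 1, "Critical": 1, "Marginal": 3, "Negligible": 4},  # Autonomous
    2: {"Catastrophic": 1, "Critical": 2, "Marginal": 3, "Negligible": 4},  # Semi-autonomous
    3: {"Catastrophic": 2, "Critical": 3, "Marginal": 4, "Negligible": 4},  # Redundant fault tolerant
    4: {"Catastrophic": 3, "Critical": 4, "Marginal": 4, "Negligible": 4},  # Influential
    5: {"Catastrophic": 5, "Critical": 5, "Marginal": 5, "Negligible": 5},  # No safety impact
}

def swci(scc: int, severity: str) -> int:
    """Return the Software Criticality Index for a safety-significant function."""
    return SSCM[scc][severity]

# Example: a semi-autonomous function whose worst credible outcome is catastrophic.
print(swci(2, "Catastrophic"))  # SwCI 1 -> the highest level of rigor
```

Note that, as B.2.2.3 stresses, a low SwCI number is a demand for more analysis and test rigor, not a statement that the design is unacceptable.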
B.2.2.3 Software Safety Criticality Matrix (SSCM) tailoring. Tables IV through VI should be used, unless tailored alternative matrices are formally approved in accordance with Department of Defense (DoD) Component policy. However, tailoring should result in a SSCM that meets or exceeds the LOR tasks defined in Table V in 4.4.2 of this Standard. A SwCI 1 from the SSCM implies that the assessed software function or requirement is highly critical to the safety of the system and requires more design, analysis, and test rigor than software that is less critical prior to being assessed in the context of risk reduction. Software with SwCI 2 through SwCI 4 typically requires progressively less design, analysis, and test rigor than high criticality software. Unlike the hardware-related risk index, a low index number does not imply that a design is unacceptable. Rather, it indicates a requirement to apply greater resources to the analysis and testing rigor of the software and its interaction with the system. The SSCM does not consider the likelihood of a software-caused mishap occurring in its initial assessment. However, through the successful implementation of a system and software system safety process and LOR tasks, the likelihood of software contributing to a mishap may be reduced.
B.2.2.4 Software system safety and requirements within software development processes. Once safety-significant software functions are identified, assessed against the SCC, and assigned a SwCI, the implementing software should be designed, coded, and tested against the approved SDP containing the software system safety requirements and LOR tasks. These criteria should be defined, negotiated, and documented in the SDP and the Software Test Plan (STP) early in the development life-cycle.
a. SwCI assignment. A SwCI should be assigned to each safety-significant software function and the associated safety-significant software requirements. Assigning the SwCI value of Not Safety to non-safety-significant software requirements provides a record that functionality has been assessed by software system safety engineering and deemed Not Safety. Individual safety-significant software requirements that track to the hazard reports will be assigned a SwCI. The intent of SwCI 4 is to ensure that requirements corresponding to this level are identified and tracked through the system. These “low” safety-significant requirements need only the defined safety-specific testing.
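One way to picture the record-keeping described above is a requirements list where every item carries either a SwCI or an explicit Not Safety tag, so the assessment itself is traceable. This is purely my own illustrative sketch; the identifiers, field names, and "NS" tag are hypothetical, not drawn from the standard.

```python
# Hypothetical sketch: record the SwCI (or "Not Safety") against every
# requirement so the assessment is traceable. All names are illustrative.
from dataclasses import dataclass
from typing import Optional

NOT_SAFETY = "NS"

@dataclass
class Requirement:
    req_id: str
    text: str
    swci: Optional[int] = None  # 1..4 once assessed as safety significant
    tag: str = ""               # "NS" once assessed and deemed Not Safety

def assess(req: Requirement, swci: Optional[int]) -> Requirement:
    """Record the outcome of the software safety assessment on a requirement."""
    if swci is None:
        req.tag = NOT_SAFETY    # assessed, deemed Not Safety -- still a record
    else:
        req.swci = swci
    return req

reqs = [
    assess(Requirement("SRS-101", "Inhibit fire command unless armed"), 1),
    assess(Requirement("SRS-412", "Log maintenance actions"), 4),
    assess(Requirement("SRS-730", "Render splash screen"), None),
]
safety_significant = [r.req_id for r in reqs if r.swci is not None]
print(safety_significant)  # ['SRS-101', 'SRS-412']
```

The point of the explicit tag is the one made in the paragraph above: an unassessed requirement and a requirement assessed as Not Safety must be distinguishable.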
b. Task guidance. Guidance regarding tasks that can be placed in the SDP, STP, and safety program plans can be found in multiple references, including the Joint Software Systems Safety Engineering Handbook by the Joint Software Systems Safety Engineering Workgroup and AOP 52, Guidance on Software Safety Design and Assessment of Munition-Related Computing Systems. These tasks and others that may be identified should be based on each individual system or SoS and its complexity and safety criticality, as well as available resources, value added, and level of acceptable risk.
B.2.2.5. Software system safety requirements and tasks. Suggested software system safety requirements and tasks that can be applied to a program are listed in the following paragraphs for consideration and applicability:
a. Design requirements. Design requirements to consider include fault tolerant design, fault detection, fault isolation, fault annunciation, fault recovery, warnings, cautions, advisories, redundancy, independence, N-version design, functional partitioning (modules), physical partitioning (processors), design safety guidelines, generic software safety requirements, design safety standards, and best and common practices.
b. Process tasks. Process tasks to consider include design review, safety review, design walkthrough, code walkthrough, independent design review, independent code review, independent safety review, traceability of SSFs, SSF code review, Safety-Critical Function (SCF) code review, SCF design review, test case review, test procedure review, safety test result review, independent test results review, safety quality audit inspection, software quality assurance audit, and safety sign-off of reviews and documents.
c. Test tasks. Test task considerations include SSF testing, functional thread testing, limited regression testing, 100 percent regression testing, failure modes and effects testing, out-of-bounds testing, safety-significant interface testing, Commercial-Off-the-Shelf (COTS), Government-Off-the-Shelf (GOTS), and Non-Developmental Item (NDI) input/output testing and verification, independent testing of prioritized SSFs, functional qualification testing, IV&V, and nuclear safety cross-check analysis.
d. Software system safety risk assessment. After completion of all specified software system safety engineering analysis, software development, and LOR tasks, results will be used as evidence (or input) to assign software’s contribution to the risk associated with a mishap. System safety and software system safety engineering, along with the software development team (and possibly the independent verification team), will evaluate the results of all safety verification activities and will perform an assessment of confidence for each safety-significant requirement and function. This information will be integrated into the program hazard analysis documentation and formal risk assessments. Insufficient evidence or evidence of inadequate software system safety program application should be assessed as risk.
(1) Figure B-1 illustrates the relationship between the software system safety activities (hazard analyses, software development, and LOR tasks), system hazards, and risk. Table B-I provides example criteria for determining risk levels associated with software.
FIGURE B-1. Assessing software’s contribution to risk
(2) The risks associated with system hazards that have software causes and controls may be acceptable based on evidence that hazards, causes, and mitigations have been identified, implemented, and verified in accordance with DoD customer requirements. The evidence supports the conclusion that hazard controls provide the required level of mitigation and the resultant risks can be accepted by the appropriate risk acceptance authority. In this regard, software is no different from hardware and operators. If the software design does not meet safety requirements, then there is a contribution to risk associated with inadequately verified software hazard causes and controls. Generally, risk assessment is based on quantitative and qualitative judgment and evidence. Table B-I shows how these principles can be applied to provide an assessment of risk associated with software causal factors.
e. Defining and following a process for assessing risk associated with hazards is critical to the success of a program, particularly as systems are combined into more complex SoS. These SoS often involve systems developed under disparate development and safety programs and may require interfaces with other Service (Army, Navy/Marines, and Air Force) or DoD agency systems. These other SoS stakeholders likely have their own safety processes for determining the acceptability of systems to interface with theirs. Ownership of the overarching system in these complex SoS can become difficult to determine. The process for assessing software’s contribution to risk, described in this Appendix, applies the same principles of risk mitigation used for other risk contributors (e.g., hardware and human). Therefore, this process may serve as a mechanism to achieve a “common ground” between SoS stakeholders on what constitutes an acceptable level of risk, the levels of mitigation required to achieve that acceptable level, and how each constituent system in the SoS contributes to, or supports mitigation of, the SoS hazards.
Hello, everyone, and
welcome to the Safety Artisan, where you will receive safety training via
instructional videos on system safety, software safety and design safety. Today
I’m talking about design safety. I’m Simon and I’m recording this on the 12th
of January 2020, so our first recording of the new decade and let’s hope that
we can give you some 20/20 vision. What we’re going to be talking about is safe
design, and this safe design guidance comes from Safe Work Australia. I’m
showing you some text taken from the website and adding my own commentary.
The topics that we’re going to cover today are a safe design approach, five principles of safe design, ergonomics (more broadly, human factors), who has responsibility, doing safe design through the product lifecycle, the benefits of it, our legal obligations in Australia (but this is good advice wherever you are) and the Australian approach to improving safe design in order to reduce casualties in the workplace.
The idea of safe design is that it’s about integrating safety management, hazard identification and risk assessment early in the design process to eliminate or reduce risks throughout the life of a product, whatever the product is: it might be a building, a structure, equipment, a vehicle or infrastructure. This is important because in Australia, in a five-year period, we suffered almost 640 work-related fatalities, of which almost 190 were caused by unsafe design, or design-related factors contributed to the fatality. So there’s an important reason to do this stuff; it’s not an academic exercise, we’re doing it for real reasons. And we’ll come back to the reason why we’re doing it at the end of the presentation.
A Safe Design Approach
First, we need to begin safe design right at the start of the lifecycle (we will see more of that later), because it’s at the beginning of the lifecycle that you’re making your big decisions about requirements. What do you want this system to do? How do we design it to do that? What materials, components and subsystems are we going to make or buy in order to put this thing together, whatever it is? And we’re thinking about how we are going to construct it, maintain it, operate it and then get rid of it at the end of life. So there are lots of big decisions being made early in the life cycle. And sometimes these decisions are made accidentally, because we don’t consciously think about what we’re doing. We just do stuff, and then we realise afterwards that we’ve made a decision with sometimes quite serious implications.
A big part of my day job as a consultant is trying to help people think about those issues and make good decisions early on, when it’s still cheap, quick and easy to do. Because, of course, the more you’ve invested into a project, the more difficult it is to make changes, both from a financial point of view and because, if people have invested their time, sweat and tears into a project, they get very attached to it and they don’t want to change it. There’s an emotional investment made in the project. So the earlier you get in, at the feasibility stage let’s say, and think about all of this stuff, the easier it is to do it. A big part of that is: where is this kit going to end up? What legislation, codes of practice and standards do we need to consider and comply with? So that’s the approach.
So, designers need to consider how safety can be achieved through the lifecycle. For example, can we design a machine with protective guarding so that the operator doesn’t get hurt using it, but also so the machine can be installed and maintained? That’s an important point, as often to get at the stuff we must take it apart, and maybe we must remove some of those safety features. How do we then protect the maintainer when the machine is opened up and the workings are exposed, where you can get caught in moving parts or electrocuted? And how do we get rid of it? Maybe we’ve used some funky chemicals that are quite difficult to get rid of. In Australia, as I suspect in many other places, we’ve got a mountain of old buildings that are full of asbestos, which is costing a gigantic sum of money to get rid of safely. We need to design a building which is fit for occupancy. Maybe we need to think about occupants who are not able-bodied, or who are moving stuff around in the building and need a trolley to carry it. We need access, we need sufficient space to do whatever it is we need to do.
This all sounds simple, obvious, doesn’t it? So, let’s look at these five principles. First of all, a lot of this you’re going to recognise from the legal stuff because the principles of safe design are very much tied in and integrated with the Australian legal approach, WHS, which is all good, all consistent and all fits together.
5 Principles of Safe Design
Principle 1: Persons with control. If you’re making a decision that affects the design of products, facilities or processes, it is your responsibility to think about safety; it’s part of your due diligence (if you recall that phrase from that session).
Principle 2: We need to apply safe design at every stage in the lifecycle, from the very beginning right through to the end. That means thinking about risks and eliminating or managing them as early as we can but thinking forward to the whole lifecycle; sounds easy, but it’s often done very badly.
Principle 3: Systematic risk management. We need to apply these things that we know about and listen to other broadcasts from The Safety Artisan. We go on and on and on about this because this is our bread and butter as safety engineers, as safety professionals – identify hazards, assess the risk and think about how we will control the risks in order to achieve a safe design.
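The identify-assess-control loop in Principle 3 can be pictured as a minimal hazard-log entry that a risk assessment would populate. This is my own illustrative sketch, not a Safe Work Australia artefact; the field names and example values are hypothetical.

```python
# A minimal, illustrative hazard-log record for the identify -> assess ->
# control loop described above (my own sketch, not an official format).
from dataclasses import dataclass, field
from typing import List

@dataclass
class HazardEntry:
    hazard: str        # what could cause harm
    severity: str      # assessed worst credible outcome
    likelihood: str    # assessed chance of that outcome
    controls: List[str] = field(default_factory=list)  # how we reduce the risk

entry = HazardEntry(
    hazard="Operator contact with rotating machinery",
    severity="Major injury",
    likelihood="Possible",
)
# Controls are added as the design matures, most effective first
# (eliminate, substitute, engineer, administrate, then PPE).
entry.controls += ["Fixed guarding over rotating parts", "Interlocked access door"]
print(len(entry.controls))  # 2
```

The point of keeping a record like this is the same one the transcript makes later: the log is part of the information you pass on to users, maintainers and disposers.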
Principle 4: Safe design knowledge and capability. If you’re controlling the design, if you’re doing technical work or you’re managing it and making decisions, you must know enough about safe design and have the capability to put these principles into practice to the extent that you need to discharge your duties. When I’m thinking of duties, I’m especially thinking of the health and safety duties of officers, managers and people who make decisions. You need to exercise due diligence (see the Work Health and Safety lessons for more about due diligence).
Principle 5: Information transfer. Part of our duties is not just to do stuff well, but to pass on the information that the users, maintainers, disposers, etc will need in order to make effective use of the design safely. That is through all the lifecycle phases of the product.
So those are the five
principles of safe design, and I think they’re all obvious really, aren’t they?
So, let’s move on.
A Model for Safe Design
As the saying goes, a picture is worth a thousand words. Here is the overview of the Safe Design Model, as they call it. We’ve got activities in a sequence running from top to bottom down the centre. Then on the left, we’ve got monitor and review; that is, somebody in a management or controlling function keeping an eye on things. On the right-hand side, we need to communicate and document what we’re doing. And of course, it’s not just documentation for documentation’s sake; we need to do this in order to fulfil our obligations to provide all the necessary information to users, etc. So that’s the basic layout.
If we zoom in on the early
stage, Pre-Design, we need to think about what problem are we trying to solve?
What are we trying to do? What is the context for our enterprise? And that
might be easy if you’re dealing with a physical thing. If you build a car, you
make cars to be driven on the road. There’ll be a driver and maybe passengers
in the car; there’ll be other road users around you, pedestrians, etc. So with the
physical system, it’s relatively easy with a bit of imagination and a bit of
effort to think about who’s involved. But of course, not just use, but
maintenance as well. Maybe it’s got to go into a garage for a service etc, how
do we make things accessible for maintainers?
And then we move on to Concept Development. We might need to do some research, gather some information, think about previous systems or related systems and learn from them. We might have to consult with some people who are going to be affected, some stakeholders, who are going to be affected by this enterprise. We put all of that together and use it to help us identify hazards. Again, if we’re talking about a physical system, say if you make a new model of car, it’s probably not that different from the previous model of a car that you designed. But of course, every so often you do encounter something that is novel, that hasn’t been done before, or maybe you’re designing something that is virtual like software and software is intangible. With intangible things, it’s harder to do this. It requires a bit more effort and more imagination. It can be done, don’t be frightened of it but it does require a bit more forethought and a bit more effort and imagination than is perhaps the case with something simple and familiar like a car.
Moving on in the life cycle we have Design Options. We might think about several different solutions. We might generate some solutions; we might analyse and evaluate the risks of those options before selecting which option we’re going to go with. This doesn’t happen in reality very often, because often we’re designing something that’s familiar, or people say, “Well, actually, I’m buying a bunch of kit off the shelf (i.e. a bunch of components) and just putting it together, so there is no ‘optioneering’ to do.”
That’s actually incorrect, because very often people do ‘optioneering’ by default, in that they buy the component that is cheap and readily available. But they don’t necessarily ask: is the supplier going to provide the safety data that I need to go along with this component? And maybe the more reputable supplier that does do that is going to charge you more. So you need to think about where you’re going to end up with all of this and evaluate your options accordingly. And of course, if you are making a system that is purely made from off-the-shelf components, there’s not a lot of design to do; there is just integration.
Well, that pushes all your design decisions and all your options much earlier on in the lifecycle, much higher up on the diagram as we see here. So we are still making design options and design decisions, but maybe it’s just not as obvious. I’ve seen a lot of projects come unstuck because they just bought something that they liked the look of; it appealed to the operators (if you live in an operator-driven organisation, you’ll know what I mean). Some people buy stuff because they are magpies and it looks shiny, fun and funky! Then they buy it, and people like me come along and start asking awkward questions about how you are going to demonstrate that this thing is safe to use and that you can put it in service. And then, of course, it doesn’t always end well if you don’t think about these things upfront.
So, moving on to Design Synthesis. We’ll select a solution, put stuff together, and work on controlling the risks in the system that we are building. I know it says eliminate and control risks, and if you can eliminate risks then that’s great, but very often you can’t. So we have to deal with the risks that we cannot get rid of, and there are usually some risks in whatever you’re dealing with.
Then we get to Design Completion, where we implement the design, where we put it together and see if it does come together in the real world as we envisaged it. That doesn’t always happen. Then we have got to test it and see whether it does what it’s supposed to do. We’re normally pretty good at testing for that, because if you set out requirements for what it’s supposed to do, then you’ve got something to test against. And of course, if you’re trying to sell a product or service, or you’re trying to get regulators or customers to buy into this thing, it’s got to do what you said it’s going to do. So there’s a big incentive to test the thing to make sure it does what it should do.
We’re not always so good at testing it to make sure that it doesn’t do what it shouldn’t do. That can be a bigger problem space, depending on what you’re doing. And that is often the trick, and that’s where safety people get involved. The requirements engineers and systems engineers are great at saying, yes, here are the requirements; test against the requirements. And then it’s the safety people that come along and say, oh, by the way, you need to make sure that it doesn’t blow up, catch fire, or get so hot that it can burn people. You need to eliminate the sharp edges. You need to make sure that people don’t get electrocuted when operating or maintaining this thing or disposing of it. You must make sure they don’t get poisoned by any chemicals that have been built into the thing. Even thinking about: if there’s been an accident and the vehicle, or whatever it is, has been damaged or destroyed, and we’ve now got debris spread across the place, how do we clear that up? For some systems that can be a very challenging problem.
Ergonomics & Work Design
So, we’re going to move on now to a different subject, and a very important subject in safe design. I think this is one of the great things about safe design and good work design in Australia – that it incorporates ergonomics. We need to think about human interaction with the system, as well as the technical design itself, and I think that’s very important. It’s something that is very easy, especially for technical people, to miss. As engineers, some of us love diving into the detail; that’s where we feel comfortable, that’s what we want to do, and then maybe we sometimes miss the big picture – somebody is actually going to use this thing and make it work. So we need to think about all of our workers to make sure that they stay healthy and safe at work. We need to think about how they are going to physically interact with the system, etc. It may not be just the physical system that we’re designing, but of course the work processes that go around it, which is important.
It is worth pointing out that in the UK I’m used to a narrower definition of ergonomics. I’m used to a definition of ergonomics that’s purely about the physical way that humans interact with the system. Can they do so safely and comfortably? Can they do repetitive tasks without getting injured? That includes anthropometric aspects, where we think about the variation in size of human beings, different sexes, different races. Also, how do people fit in the machine or the vehicle, or interact with it?
However, in Australia, the way we talk about ergonomics is a much bigger picture than that. I would say don’t just think about ergonomics; think about human factors. It’s the science of people working. Let’s understand human capabilities and apply that knowledge in the design of equipment, tools, systems and ways of working that we expect the human to use. Humans are pretty clever beasts in many ways, and we’re still very good at things that a lot of machines are just not very good at. So we need to design stuff which complements the human being and helps the human being to succeed, rather than just optimising the technical design in isolation. This quotation is from the ARPANSA definition, because it was the best one that I could find in Australia. I will no doubt talk about human factors another time in some depth.
Under the law (this is tailored for Australian law, but a lot of it is still good principle that applies anywhere), different groups and individuals have responsibilities for safe design: those who manage the design and the technical stuff directly, and those who make resourcing decisions. For example, we can think about building architects, industrial designers, etc., who create the design. There are individuals who make design decisions at any lifecycle phase; that could be a wide range of people, and of course not just technical people, but stakeholders who make decisions about how people are employed, how people are to interact with these systems, how they are to maintain it and dispose of it, etc. And of course, work health and safety professionals themselves. There’s potentially a wide range of stakeholders involved here.
Also, anybody who alters the design. It may be that we’re talking about a physical alteration to the design, or maybe we’re just using a piece of kit in a different context. Say we’re using a machine or a process or a piece of software that was designed to do X, and we’re actually using it to do Y, which is more common than you might think. If we are now putting an existing design in a different context, which was never envisaged by the original designers, we need to think about the implications of both the environment on the design and the design on the environment, and the human beings mixed up working in both.
There’s a lot of accidents caused by modifying bits of kit, including you might say, a signature accident in the UK: the Flixborough Chemical Plant explosion. That was one of the things that led to the creation of modern Health and Safety law in the UK. It was caused by people modifying the design and not fully realising the implications of what they were doing. Of course, the result was a gigantic explosion and lots of dead bodies. Hopefully it won’t always be so dramatic with the things that we’re looking at, but nevertheless, people do ask designs to do some weird stuff.
If we want safe design, we can get it more effectively and more efficiently when the people who control and influence outcomes and who make these decisions get together, so that they collaborate on building safety into the design rather than trying to add it on afterwards, which in my experience never goes well. We want to get people together and think about these things up front, where it’s maybe a desktop exercise or a meeting somewhere. It requires some people to attend the meeting and prepare for it and so on, and we need records, but that’s cheap compared to later design stages. When we’re talking about industrial plants or something that’s going to be mass-produced, making decisions later is always going to be more costly and less effective, and therefore it’s going to be less popular and harder to justify. So get in and do it early while you still can. There’s some good guidance on all this stuff, on who is responsible.
There is the Principles of Good Work Design handbook, which was created by Safe Work Australia and is also on The Safety Artisan website (I gained permission to reproduce it), and there’s a model Code of Practice for the safe design of structures. There was going to be a model Code of Practice for the safe design of plant, but that never became a Code of Practice; it remains guidance. Nevertheless, there is a lot of good material in there. And there are the Work Health and Safety (WHS) Regulations.
Incidentally, there’s also a lot of good guidance available on Major Hazard Facilities (MHFs). A Major Hazard Facility is anywhere that stores large amounts of potentially dangerous chemicals, but the safety principles in the MHF guidance are very good and are generally applicable, not just to chemical safety but to any large undertaking where you could hurt a lot of people. The MHF guidance, I believe, was originally inspired by the COMAH regulations in the UK, which again came from a major industrial disaster: the Piper Alpha platform in the North Sea, which caught fire and killed 167 people. If you’ve got an enterprise where you could foresee a mass-casualty situation, you’ll get a lot more help from the MHF material, which is really aimed at preventing industrial-sized accidents. There’s lots of good material available to help us.
Design for Plant
So, some examples of things we should consider. We need to (and I don’t think this will be any great surprise to you) think about all phases of the lifecycle; I think we’ve banged on about that enough. Whatever the plant may be, consider it from design and manufacture right through to disposal. Can we put the plant together? Can we erect the structure and install it? Can we facilitate safe use? Again, think about the physical characteristics of your users, but not just the physical; think about the cognitive characteristics of your users too. If we’re making a control system, can the users practically exploit the plant for its intended purpose while staying safe? What can the operator actually do, and what can we expect them to perform successfully and reliably, time after time? We want this stuff to keep working for a long, long time, in order to make money or to do whatever it is we want to do. And we also need to think about the environment in which the plant will be used, which is very important.
Some more examples. Think about the intended use and reasonably foreseeable misuse. If you know that a piece of kit tends to get misused for certain things, then either design against it or give the operator a better way of doing it. A strange but memorable example: apparently the designers of a particular assault rifle knew that soldiers tended to use part of the rifle to open beer bottles, so they incorporated a bottle opener into the design so that soldiers would use that rather than damage the rifle. We have to consider intended use by law; if you go to the WHS lesson, you’ll see it written right through everybody’s duties. Reasonably foreseeable misuse, I don’t think, is a hard requirement in every case, but it’s still a sensible thing to address.
Think about the difficulties that workers might face doing repairs or maintenance. Again, sorry, I’ve banged on about that; I came from a maintenance world originally, so I’m very used to those kinds of challenges. And consider what could go wrong; here we’re getting back into classic design safety. Think about the failure modes of your plant. Ideally, we always want a fail-safe design, but if we can’t achieve that, how can we warn people? How can we minimise the risk if something goes wrong and a foreseeable hazard occurs? And by ‘foreseeable’ I don’t just mean that we shut ourselves in a darkened room and couldn’t think of anything; we need to look at real-world examples of similar pieces of kit. Look at real-world history, because there’s often a lot of learning out there that we can exploit, if only we bother to look it up. I think it was Bismarck who said that only a fool learns from his own mistakes; a wise man learns from other people’s mistakes. That’s what we try to do in safety.
Moving on to the lifecycle, this is a key concept; again, I’ve gone on and on about this. We need to control risks not just during use, but during construction and manufacture, in transit, when the plant is being commissioned and tested, used and operated, and when it’s being repaired, maintained, cleaned, or modified. And then towards the end of life (it may not actually be the end of life) when it’s being decommissioned: maybe we’re decommissioning the kit and moving it to a new site, or maybe a new owner has bought it. We need to be able to safely take it apart, move it, and put it back together again. Of course, by that stage we may have lost the original packaging, so we may have to think quite carefully about how we do this; or maybe we can’t disassemble it as fully as we did during the original installation, and we’ve got to move an awkward bit of kit around. And then at the true end of life, how are we going to dismantle or demolish it? Are we going to dispose of it or, ideally, recycle it? Hopefully, if we haven’t built in anything too nasty or too difficult to recycle, we can do that, which would be a good thing.
It’s worth reminding ourselves that we get a safer product, one that is better for downstream users, if we eliminate and minimise hazards as early as we can. As I said before, in these early phases there’s more scope to design things out without compromising the design and without putting limitations on what it can do. When you’re adding safety in later, so often that is achieved only at a cost: it limits what the users can do, or maybe you can’t run the plant at full capacity, which is clearly undesirable. Designers must have a good understanding of the lifecycle of their kit, of the people who will interact with it, and of the environment in which it’s used. Again, if you’ve listened to me talking about system safety concepts, we hammer this point: it’s not just the plant, it’s what you use it for, the people who will use it, and the environment in which it is used. Especially for complex things, we need to take all of those into account, and it’s not a trivial exercise to do so.
Thirdly, as we go through the product lifecycle, we may discover new risks, and this does happen. People make assumptions during the concept and design phases (and fair enough, you must make assumptions sometimes in order to get anything done), but those assumptions don’t always turn out to be completely correct, or some aspect gets missed. It’s the thing you didn’t anticipate that often gets you.
As we go through the lifecycle, we can further improve safety if the people who have control over the decisions and actions taken incorporate health and safety considerations at every stage, and proactively look at whether we can make things better, or whether something has occurred that we didn’t anticipate and therefore needs to be looked into.
Another good principle that doesn’t always happen: we shouldn’t proceed to the next stage in the lifecycle until we have completed our design reviews, thought about health and safety along with everything else, and those who have control have had a chance to consider everything together and, if they’re happy with it, to approve it so it moves on. It will come as no surprise to many listeners that there are a lot of projects out there that either don’t hold design reviews at all, or rush them. Lip service is very often paid to them, because people forget that design reviews are there to review the design and make sure it’s fit for purpose, safe, and all the other good things. We just get obsessed with getting through those reviews, whether because we’re the purchaser, or because we’re keen to get on with the job and the schedule must be maintained at all costs.
Or, if you’re the supplier, you want to get through those reviews because there’s a payment milestone attached to them. There’s a lot of temptation to rush these things, and rushing them often just results in more trouble further downstream. I know it takes a lot of guts, particularly early in a project, to say: no, we’re not ready for this design review; we need to do some more work so that we can get through it properly. That’s a big call to make, because not a lot of people are going to like you for making it, but it does need to happen.
Benefits of Safe Design
So, let’s talk about the benefits. These are not my estimates; these are Safe Work Australia’s words. From what they’ve seen in Australia, and I suspect from surveying safety performance elsewhere as well, they estimate that building safety into plant can save you up to 10% of its cost, whether through reductions in holdings of hazardous materials, a reduced need for personal protective equipment, or a reduced need for field testing and maintenance. And that’s a good point. Very often we see large systems and large enterprises brought into being without sufficient consideration of these things, with people thinking only about the capital costs of getting the kit into service. Now, if you’re spending millions or even billions on a large infrastructure project, of course you will focus on the upfront costs of that infrastructure, and of course you are focused on getting that stuff into service as soon as possible, so you can start earning money to pay for the capital costs.
But it’s also worth thinking about safety upfront (and a lot of other design disciplines as well, of course), and making sure that you’re not building yourself a lifetime of pain doing maintenance and testing that, to be honest, you really don’t want to be doing, but which you’re stuck with because you didn’t design something out. We can hopefully eliminate or minimise the direct costs of unsafe design, which can be considerable: rework, compensation, insurance, environmental clean-up. You can be prosecuted by the government for criminal transgressions, and you can be sued by the relatives of the dead, by the injured, by the inconvenienced, and by those who’ve been moved off their land.
And these things will impact parties downstream, not the designers; in fact, often, but not always, it is those who bought the product and used it who bear the cost. So there’s a lot of incentive out there to minimise your liability, to get it right upfront, and to be able to demonstrate that you got it right upfront. Particularly if you’re a designer or a manufacturer, and you’re worried that some of your users are maybe not as professional and conscientious in using your stuff as you would like, because it’s still got your name and your company logo plastered all over it.
I don’t think there’s anything new in here; we see the direct benefits. We’ve prevented injury and disease, and that’s good, not just your own but other people’s. We can improve usability: very often, if you improve safety through better human factors and ergonomics, you’re going to get a more usable product that people like using; it’s going to be more popular, and maybe you’ll sell more. You’ll improve productivity, so those who are paying for the output are happy. And you’ll reduce costs through life (you might have to spend a little bit more money upfront), and you can better predict and manage operations, because you’re not having so many outages due to incidents or accidents.
Also, we can demonstrate compliance with legislation, which will help you plug the kit in the first place, but which is also necessary if you’re going to get past a regulator, or indeed if you don’t want to go to jail for contravening the WHS Act.
And, among the benefits: innovation. I have to say innovation is a double-edged sword, because some industries love innovation, and you’ll be very popular if you innovate; other industries hate innovation, and you will not be popular if you innovate. That last bullet, I’m not so sure it’s really about innovation. Safe design doesn’t necessarily demand new thinking; it just demands thinking. Most preventable things that I’ve seen go wrong could have been stopped with a little bit of thought, a little bit of imagination, and a little bit of learning from the past, not ‘innovating’ the future.
That brings us neatly on to our legal obligations. In Australia (and other countries will have similar obligations), work health and safety law imposes duties on many people: designers, manufacturers, importers, suppliers, and anybody who assembles, erects, modifies, or disposes of the kit. These obligations will vary depending on the state or territory, or on whether Commonwealth WHS law applies; but if it’s WHS, it’s all based on the model WHS laws from Safe Work Australia, so it will be very similar. In the WHS lesson, I talk about what these duties are and what you must do to meet them. You will be pleased to know that the guidance on safe design is in lockstep with those requirements. This is all good stuff, not because I’m saying it, but because I’m showing you what’s come out of the statutory authority.
Yes, these obligations may vary; we talk about that quite a lot in other sessions. Those who make decisions, not just technical people but those who control the finances, have duties under WHS law. Again, go and see the WHS lesson, which talks about these duties, particularly the duties of senior management (‘officers’) and due diligence. There are specific safety ‘due diligence’ requirements in WHS law which are very well written and very easy to read and understand, so there’s no excuse for not looking at this stuff: it is very easy to see what you’re supposed to do and how to stay on the right side of the law. And the duties apply whether you’re an employer, self-employed, or in control of a workplace or not; there are duties on designers upstream who will never go near the workplace where the kit is actually used. If a client has a building or structure designed and built for leasing, they become the owner of the building, and they may well retain health and safety duties for the lifetime of that building if it’s used as a workplace or to accommodate workers.
I just want to briefly recap on what we’ve heard. The big lesson I’ve learned in my career is that safe design is not just a technical activity for the designers. I’ve worked in many organisations where, historically, technical risks were managed over here and human or operational risks were managed over there, with a great gulf between them; the two sides never interacted very much. There was a handover point where the technical side would chuck the kit over the wall to the users and say: there, get on with it, and if you have an accident it’s your fault, because you’re stupid and you didn’t understand my piece of kit. And similarly you got the operators saying: all those technical people have no idea how we use the kit or what we’re trying to do here, the big picture; they give us kit that is not suitable, or that we have to misuse in order to get it to do the job.
So, if you have these two teams playing separately, not interacting and not cooperating, it’s a mess. Certainly in Australia, there are very explicit requirements in the law, the regulations, and a whole code of practice on consultation, communication, and cooperation. These two sides of the operation have got to come together in order to make the whole thing work. And WHS law does not differentiate between different types of risk; there is just risk to people. So you cannot hide behind “well, I do technical risk; I don’t think about how people will use it”: you’ve just broken the law. You’ve got to think about the big picture, and we can’t keep going on in our silos, our stovepipes.
That’s a little bit of a heart-to-heart, but that, I think, is really the value-add from all of this. The great thing about this design guidance is that it encourages you to think through life, to think about who is going to use the kit, and to think about the environment. Quite cheaply and quite quickly, you can make some dramatic improvements in safety by thinking about these things.
I’ve met a lot of technical people who, if a risk control measure isn’t technical, isn’t highly complicated, and doesn’t involve clever engineering, have a tendency to look down their noses at it. What we need to be doing is looking at how we reduce risk and what the benefits are in terms of risk reduction. It might be a really simple thing, one that seems almost trivial to a technical expert, that actually delivers the safety, and that’s what we’ve got to think about, not necessarily a clever technical solution. If we must have a clever technical solution to make it safe, so be it; but we’d quite like to avoid that most of the time if we can.
In Australia, for the 10 years to 2022, we have certain targets: there are seven national action areas, and safe design is one of them. As I’ve said several times, Australian legislation requires us to consult, cooperate, and coordinate, so far as is reasonably practicable, and to work together rather than chuck problems over the wall to somebody else. You might think you’ve delegated responsibility to somebody else, but if you’re an officer of the person conducting the business or undertaking, then you cannot ditch all of your responsibilities, so you need to think very carefully about what’s being done in your name, because legally it can come back to you. You can’t just assume that somebody else is doing it and will do a reasonable job; it’s your duty to ensure that it is done, that you’ve provided the resources, and that it is actually happening.
What we want to achieve in this 10-year period is a real reduction: a 30% reduction in serious injuries nationwide, and a reduction in work-related fatalities of at least one fifth. These are specific and valuable real-world targets. This is not an academic exercise; it’s about reducing the body count and the number of people who end up in hospital, blinded or missing limbs. It’s important stuff. And as it says, Safe Work Australia and all the different regulators have been working together with industry, unions, and special interest groups in order to make this happen, which is all excellent.
Safe Design – the End
It just remains for me to say that most of the text I’ve shown you is from the Safe Work Australia website and has been reproduced under a Creative Commons licence; you can see the full details on the safetyartisan.com website. The words of this presentation itself are copyright of The Safety Artisan, copyright 2020 (although I drafted it in 2019).
Now, if you want more lessons on safety topics, please visit The Safety Artisan page at Patreon.com, and there are many more resources on the website. That is the end of the presentation, so thank you very much for listening and watching, and from The Safety Artisan I wish you a successful and safe 2020. Goodbye.
Safe design is about integrating hazard identification and risk assessment methods early in the design process, to eliminate or minimise risks of injury throughout the life of a product. This applies to buildings, structures, equipment and vehicles.
Statistics and Research
Of 639 work-related fatalities from 2006 to 2011, almost one-third (188) were caused by unsafe design, or design-related factors contributed to the fatality.
Of all fatalities where safe design was identified as an issue, one fifth (21%) were caused by inadequate protective guarding for workers.
73% of all design related fatalities were from agriculture, forestry and fishing, construction and manufacturing industries.
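The proportions quoted above can be sanity-checked with a few lines of arithmetic (the raw figures are those given in the Safe Work Australia statistics; the script below is just an illustrative check, not part of the original guidance):

```python
# Check the design-related fatality proportions quoted above.
total_fatalities = 639   # work-related fatalities investigated, 2006-2011
design_related = 188     # fatalities where unsafe design was a factor

share = design_related / total_fatalities
# 188 / 639 is about 29.4%, i.e. "almost one-third".
print(f"Design-related share: {share:.1%}")
```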
A safe design approach
Safe design begins at the concept development phase of a structure when you’re making decisions about:
the design and its intended purpose
materials to be used
possible methods of construction, maintenance, operation, demolition or dismantling and disposal
what legislation, codes of practice and standards need to be considered and complied with.
Designers need to consider how safety can best be achieved in each of the lifecycle phases, for example:
Designing a machine with protective guarding that will allow it to be operated safely, while also ensuring it can be installed, maintained and disposed of safely.
Designing a building with a lift for occupants, where the design also includes sufficient space and safe access to the lift well or machine room for maintenance work.
Five principles of safe design
Principle 1: Persons with control—those who make decisions affecting the design of products, facilities or processes are able to promote health and safety at the source.
Principle 2: Product lifecycle—safe design applies to every stage in the lifecycle from conception through to disposal. It involves eliminating hazards or minimising risks as early in the lifecycle as possible.
Principle 3: Systematic risk management—apply hazard identification, risk assessment and risk control processes to achieve safe design.
Principle 4: Safe design knowledge and capability—should be either demonstrated or acquired by those who control design.
Principle 5: Information transfer—effective communication and documentation of design and risk control information amongst everyone involved in the phases of the lifecycle is essential for the safe design approach.
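Principle 3 above calls for systematic hazard identification, risk assessment and risk control. As an illustration only, a minimal hazard register might be sketched as below; the likelihood and severity scales, the rating thresholds, and the example hazards are all hypothetical, not an official Safe Work Australia scheme:

```python
# Illustrative sketch of systematic risk management (Principle 3).
# The scales, thresholds, and hazards here are hypothetical examples.
from dataclasses import dataclass

LIKELIHOOD = ["rare", "unlikely", "possible", "likely"]    # index 0..3
SEVERITY = ["minor", "moderate", "major", "catastrophic"]  # index 0..3

def risk_rating(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a qualitative risk rating."""
    score = LIKELIHOOD.index(likelihood) + SEVERITY.index(severity)
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

@dataclass
class Hazard:
    description: str
    likelihood: str
    severity: str
    control: str  # prefer elimination at the design stage, then minimisation

    @property
    def rating(self) -> str:
        return risk_rating(self.likelihood, self.severity)

# A two-entry register, for illustration.
register = [
    Hazard("unguarded drive belt", "likely", "major",
           "fixed guarding designed in from the start"),
    Hazard("manual handling of heavy blocks", "possible", "moderate",
           "specify lighter blocks at the design stage"),
]

for h in register:
    print(f"{h.description}: {h.rating} -> {h.control}")
```

The point of the sketch is the workflow, not the numbers: each identified hazard gets an assessed rating and a design-stage control recorded against it, which also supports Principle 5 (information transfer).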
Safe design incorporates ergonomics principles as well as good work design.
Good work design helps ensure workplace hazards and risks are eliminated or minimised so all workers remain healthy and safe at work. It can involve the design of work, workstations, operational procedures, computer systems or manufacturing processes.
Responsibility for safe design
When it comes to achieving safe design, responsibility rests with those groups or individuals who control or manage design functions. This includes:
Architects, industrial designers or draftspersons who carry out the design on behalf of a client.
Individuals who make design decisions during any of the lifecycle phases such as engineers, manufacturers, suppliers, installers, builders, developers, project managers and WHS professionals.
Anyone who alters a design.
Building service designers or others designing fixed plant such as ventilation and electrical systems.
Buyers who specify the characteristics of products and materials, such as masonry blocks, and by default decide the weights bricklayers must handle.
Safe design can be achieved more effectively when all the parties who control and influence the design outcome collaborate on incorporating safety measures into the design.
For more information on who is responsible for safe design, see Guidance on the principles of safe design for work, the Principles of Good Work Design Handbook, the model Code of Practice: Safe Design of Structures, and the WHS Regulations.
Design considerations for plant
Examples of things you should consider when designing plant include:
All the phases in the lifecycle of an item of plant from manufacture through use, to dismantling and disposal.
Design for safe erection and installation.
Design to facilitate safe use by considering, for example, the physical characteristics of users, the maximum number of tasks an operator can be expected to perform at any one time, the layout of the workstation or environment in which the plant may be used.
Consider intended use and reasonably foreseeable misuse.
Consider the difficulties workers may face when maintaining or repairing the plant.
Consider types of failure or malfunction and design the plant to fail in a safe manner.
The lifecycle of a product is a key concept of sustainable and safe design. It provides a framework for eliminating the hazards at the design stage and/or controlling the risk as the product is:
constructed or manufactured
imported, supplied or installed
commissioned, used or operated
maintained, repaired, cleaned, and/or modified
de-commissioned, demolished and/or dismantled
disposed of or recycled.
A safer product will be created if the hazards and risks that could impact on downstream users in the lifecycle are eliminated or controlled during design, manufacture or construction. In these early phases, there is greater scope to design-out hazards and/or incorporate risk control measures that are compatible with the original design concept and functional requirements of the product.
Designers must have a good understanding of the lifecycle of the item they are designing, including the needs of users and the environment in which that item may be used.
New risks may emerge as products are modified or the environments in which they are used change.
Safety can be further improved if each person who has control over actions taken in any of the lifecycle phases takes steps to ensure health and safety is pro-actively addressed, by reviewing the design and checking it meets safety standards in each of the lifecycle phases.
Subsequent stages of the product’s lifecycle should not go ahead until the preceding phase design reviews have been considered and approved by those with control.
Figure 2: Lifecycle of designed products
Benefits of safe design
It is estimated that inherently safe plant and equipment would save between 5% and 10% of their cost through reductions in inventories of hazardous materials, reduced need for protective equipment and the reduced costs of testing and maintaining the equipment.
The direct costs associated with unsafe design can be significant, for example retrofitting, workers’ compensation and insurance levies, environmental clean-up and negligence claims. Since these costs impact more on parties downstream in the lifecycle who buy and use the product, the incentive for these parties to influence and benefit from safe design is also greater.
A safe design approach results in many benefits, including:
prevention of injury and disease
improved useability of products, systems and facilities
better prediction and management of production and operational costs over the lifecycle of a product
compliance with legislation
innovation, in that safe design demands new thinking.
Australian WHS laws impose duties on a range of parties to ensure health and safety in relation to particular products such as:
designers of plant, buildings and structures
building owners and persons with control of workplaces
manufacturers, importers and suppliers of plant and substances
persons who install, erect or modify plant.
These obligations may vary depending on the relevant state, territory or Commonwealth WHS legislation.
Those who make decisions that influence design such as clients, chief financial officers, developers, builders, directors and managers will also have duties under WHS laws if they are employers, self-employed or if they manage or control workplaces.
For example, a client who has a building or structure designed and built for leasing becomes the owner of the building and may therefore have a duty as a person who manages or controls a workplace.
There are other provisions governing the design of buildings and structures in state and territory building laws. The Building Code of Australia (BCA) is the principal instrument for regulating architects, engineers and others involved in the design of buildings and structures.
Although the BCA provides minimum standards to ensure the health and safety of building occupants (such as structural adequacy, fire safety, amenities and ventilation), it does not cover the breadth of WHS matters that may arise during the construction phase or in the use of buildings and structures as workplaces.
In addition, there are technical design standards and guidelines produced by government agencies, Standards Australia and relevant professional bodies.
Healthy and safe by design
This is one of the Seven action areas in the Australian Work Health and Safety Strategy 2012-2022.
Hazards are eliminated or minimised by design
The most effective and durable means of creating a healthy and safe working environment is to eliminate hazards and risks during the design of new plant, structures, substances and technology and of jobs, processes and systems. This design process needs to take into account hazards and risks that may be present at all stages of the lifecycle of structures, plant, products and substances.
Good design can eliminate or minimise the major physical, biomechanical and psychosocial hazards and risks associated with work. Effective design of the overall system of work will take into account, for example, management practices, work processes, schedules, tasks and workstation design.
Sustainable return to work or remaining at work while recovering from injury or illness is facilitated by good job design and management. Managers have an obligation to make reasonable adjustments to the design of the work and work processes to accommodate individuals’ differing capabilities.
Workers’ general health and wellbeing are strongly influenced by their health and safety at work. Well-designed work can improve worker health. Activities under the Australian Strategy build appropriate linkages with healthy worker programs to support improved general worker wellbeing as well as health and safety.
National activities support the following outcomes:
Structures, plant and substances are designed to eliminate or minimise hazards and risks before they are introduced into the workplace.
Work, work processes and systems of work are designed and managed to eliminate or minimise hazards and risks.
This is Mil-Std-882E 400-Series Tasks. Back to the previous excerpt: 300-Series Tasks.
The 400-series tasks fall into two groups. Task 401 covers Safety Verification, and it is surprisingly brief for such an important task. Tasks 402 and 403 are specialist tasks related to explosives, which provide explosive-specific requirements for hazard classification and explosive ordnance disposal, respectively.
TASK 401 SAFETY VERIFICATION
401.1 Purpose. Task 401 is to define and perform tests and demonstrations or use other verification methods on safety-significant hardware, software, and procedures to verify compliance with safety requirements.
401.2 Task description. The contractor shall define and perform analyses, tests, and demonstrations; develop models; and otherwise verify the compliance of the system with safety requirements on safety-significant hardware, software, and procedures (e.g., safety verification of iterative software builds, prototype systems, subsystems, and components). Induced or simulated failures shall be considered to demonstrate the acceptable safety performance of the equipment and software.
401.2.1 When analysis or inspection cannot determine the adequacy of risk mitigation measures, tests shall be specified and conducted to evaluate the overall effectiveness of the mitigation measures. Specific safety tests shall be integrated into appropriate system Test and Evaluation (T&E) plans, including verification and validation plans.
401.2.2 Where safety tests are not feasible, the contractor shall recommend verification of compliance using engineering analyses, analogies, laboratory tests, functional mockups, or models and simulations.
401.2.3 Review plans, procedures, and the results of tests and inspections to verify compliance with safety requirements.
401.2.4 The contractor shall document safety verification results and submit a report that includes the following:
a. Test procedures conducted to verify or demonstrate compliance with the safety requirements on safety-significant hardware, software, and procedures.
b. Results from engineering analyses, analogies, laboratory tests, functional mockups, or models and simulations used.
c. T&E reports that contain the results of the safety evaluations, with a summary of the results provided.
401.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include the following, as applicable:
a. Imposition of Task 401. (R)
b. Identification of functional discipline(s) to be addressed by this task. (R)
c. Other specific hazard management requirements, e.g., specific risk definitions and matrix to be used on this program.
d. Any special data elements, format, or data reporting requirements (consider Task 106, Hazard Tracking System).
TASK 402 EXPLOSIVES HAZARD CLASSIFICATION DATA
402.1 Purpose. Task 402 is to perform tests and analyses, develop data necessary to comply with hazard classification regulations, and prepare hazard classification approval documentation associated with the development or acquisition of new or modified explosives and packages or commodities containing explosives (including all energetics).
402.2 Task description. The contractor shall provide hazard classification data to support program compliance with the Department of Defense (DoD) Ammunition and Explosives Hazard Classification Procedures (DAEHCP) (Army Technical Bulletin 700-2, Naval Sea Systems Command Instruction 8020.8, Air Force Technical Order 11A-1-47, and Defense Logistics Agency Regulation 8220.1). Such pertinent data may include:
a. Narrative information to include functional descriptions, safety features, and similarities and differences to existing analogous explosive commodities, including packaging.
b. Technical data to include Department of Defense Identification Codes (DODICs) and National Stock Numbers (NSNs); part numbers; nomenclatures; lists of explosive compositions and their weights, locations, and purposes; lists of other hazardous materials and their weights, volumes, and pressures; technical names; performance or product specifications; engineering drawings; and existing relevant Department of Transportation (DOT) classification of explosives approvals.
c. Storage and shipping configuration data to include packaging details.
d. Test plans.
e. Test reports.
402.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include the following, as applicable:
a. Imposition of Task 402. (R)
b. Hazard classification data requirements to support the Integrated Master Schedule. (R)
c. Hazard classification data from similar legacy systems.
d. Any special data elements or formatting requirements.
TASK 403 EXPLOSIVE ORDNANCE DISPOSAL DATA
403.1 Purpose. Task 403 is to provide Explosive Ordnance Disposal (EOD) source data, recommended render-safe procedures, and disposal considerations. Task 403 also includes the provision of test items for use in new or modified weapons systems, explosive ordnance evaluations, aircraft systems, and unmanned systems.
403.2 Task description. The contractor shall:
a. Provide detailed source data on explosive ordnance design functioning and safety so that proper EOD tools, equipment, and procedures can be validated and verified.
b. Recommend courses of action that EOD personnel can take to render safe and dispose of explosive ordnance.
c. Provide test ordnance for conducting EOD validation and verification testing. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of assets required.
d. Provide training aids for conducting EOD training. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of training aids required.
403.3 Details to be specified. The Request for Proposal (RFP) and Statement of Work (SOW) shall include, as applicable:
a. Imposition of Task 403. (R)
b. The number and types of test items for EOD validation and verification testing. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of assets required.
c. The number and types of training aids for EOD training. The Naval Explosive Ordnance Disposal Technology Division will assist in establishing quantities and types of training aids required.
A.1 Scope. This Appendix is not a mandatory part of the standard. The information contained herein is intended for guidance only. This Appendix provides guidance on the selection of the optional tasks and use of quantitative probability levels.
A.2. Task Application. The system safety effort described in Section 4 of this Standard can be augmented by identifying specific tasks that may be necessary to ensure that the contractor adequately addresses areas that the Program needs to emphasize. Consideration should be given to the complexity and dollar value of the program and the expected levels of risks involved. Table A-I provides a list of the optional tasks and their applicability to program phases. Once recommendations for task applications have been determined, tasks can be prioritized and a “rough order of magnitude” estimate should be created for the time and effort required to complete each task. This information will be of considerable value in selecting the tasks that can be accomplished within schedule and funding constraints.
TABLE A-I. Task application matrix
A.3. Quantitative Probability Example. For quantitative descriptions, the frequency is the actual or expected number of mishaps (numerator) during a specified exposure (denominator). The denominator can be based on such things as the life of one item; number of missile firings, flight hours, systems fielded, or miles driven; years of service, etc.
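The numerator-over-denominator description in A.3 can be made concrete with a short calculation. The sketch below is a minimal worked example; the level thresholds paraphrase the quantitative values commonly associated with MIL-STD-882E Table III and should be verified against the standard itself before use on a program.

```python
# Worked example of A.3: frequency = mishaps / exposure.
# Thresholds below are assumed from 882E Table III (verify before use).

def mishap_probability(mishaps, exposure_units):
    """Actual or expected mishaps per unit of exposure (e.g., per flight hour)."""
    return mishaps / exposure_units

def probability_level(p):
    """Map a per-unit-exposure probability to an 882E-style level (illustrative)."""
    if p >= 1e-1:
        return "A (Frequent)"
    if p >= 1e-2:
        return "B (Probable)"
    if p >= 1e-3:
        return "C (Occasional)"
    if p >= 1e-6:
        return "D (Remote)"
    return "E (Improbable)"

# Example: 2 mishaps over 400,000 flight hours -> 5e-6 per flight hour.
p = mishap_probability(2, 400_000)
assert p == 5e-6
assert probability_level(p) == "D (Remote)"
```

Note that the result depends entirely on the choice of denominator: two mishaps per 400,000 flight hours and two mishaps per 400 fielded systems are very different probabilities, which is why A.3 stresses specifying the exposure basis.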