This is the transcript of the full video, which is available here.
Hello, everyone, and welcome to the Safety Artisan, where you will receive safety training via instructional videos on system safety, software safety and design safety. Today I’m talking about design safety. I’m Simon and I’m recording this on the 12th of January 2020, so our first recording of the new decade and let’s hope that we can give you some 20/20 vision. What we’re going to be talking about is safe design, and this safe design guidance comes from Safe Work Australia. I’m showing you some text taken from the website and adding my own commentary and experience.
The topics that we’re going to cover today are a safe design approach, five principles of safe design, ergonomics (more broadly, its human factors), who has responsibility, doing safe design through the product lifecycle, the benefits of it, our legal obligations in Australia (but this is good advice wherever you are) and the Australian approach to improving safe design in order to reduce casualties in the workplace.
The idea of safe design is that it’s about integrating safety management, hazard identification and risk assessment early in the design process to eliminate or reduce risks throughout the life of a product, whatever the product is; it might be a building, a structure, equipment, a vehicle or infrastructure. This is important because in Australia, in a five-year period, we suffered almost 640 work-related fatalities, of which almost 190 were caused by unsafe design, or design-related factors contributed to the fatality. So, there’s an important reason to do this stuff; it’s not an academic exercise, we’re doing it for real reasons. And we’ll come back to the reason why we’re doing it at the end of the presentation.
A Safe Design Approach #1
First, we need to begin safe design right at the start of the lifecycle (we will see more of that later), because it’s at the beginning of the lifecycle that you’re making your big decisions about requirements. What do you want this system to do? How do we design it to do that? What materials, components and subsystems are we going to make or buy in order to put this thing together, whatever it is? And thinking about how we are going to construct it, maintain it, operate it and then get rid of it at the end of life. So, there are lots of big decisions being made early in the life cycle. And sometimes these decisions are made accidentally, because we don’t consciously think about what we’re doing. We just do stuff, and then we realise afterwards that we’ve made a decision with sometimes quite serious implications.
A big part of my day job as a consultant is trying to help people think about those issues and make good decisions early on, when it’s still cheap, quick and easy to do. Because, of course, the more you’ve invested in a project, the more difficult it is to make changes, both from a financial point of view and because, if people have invested their time, sweat and tears into a project, they get very attached to it and they don’t want to change it. There’s an emotional investment made in the project. So, the earlier you get in – at the feasibility stage, let’s say – and think about all of this stuff, the easier it is to do. A big part of that is: where is this kit going to end up? What legislation, codes of practice and standards do we need to consider and comply with? So that’s the approach.
A Safe Design Approach #2
So, designers need to consider how safety can be achieved through the lifecycle. For example, can we design a machine with protective guarding so that the operator doesn’t get hurt using it, but also so the machine can be installed and maintained? That’s an important point, as often to get at the stuff we must take it apart, and maybe we must remove some of those safety features. How do we then protect the maintainer when the machine is opened up, and the workings are things that you can get caught in or electrocuted by? And how do we get rid of it? Maybe we’ve used some funky chemicals that are quite difficult to get rid of. In Australia, I suspect like many other places, we’ve got a mountain of old buildings that are full of asbestos, which is costing a gigantic sum of money to get rid of safely. Or we need to design a building which is fit for occupancy. Maybe we need to think about occupants who are not able-bodied, or who are moving stuff around the building and need a trolley to carry it. We need access; we need sufficient space to do whatever it is we need to do.
This all sounds simple and obvious, doesn’t it? So, let’s look at these five principles. First of all, a lot of this you’re going to recognise from the legal sessions, because the principles of safe design are very much tied in and integrated with the Australian legal approach, WHS; it’s all good, all consistent, and it all fits together.
5 Principles of Safe Design
Principle 1: Persons with control. If you’re making a decision that affects the design of products, facilities or processes, it is your responsibility to think about safety; it’s part of your due diligence (if you recall that phrase from that session).
Principle 2: We need to apply safe design at every stage in the lifecycle, from the very beginning right through to the end. That means thinking about risks and eliminating or managing them as early as we can but thinking forward to the whole lifecycle; sounds easy, but it’s often done very badly.
Principle 3: Systematic risk management. We need to apply these things that we know about (and you can listen to other broadcasts from The Safety Artisan; we go on and on about this because it is our bread and butter as safety engineers, as safety professionals): identify hazards, assess the risks and think about how we will control the risks in order to achieve a safe design.
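To make that identify–assess–control loop a little more concrete, here is a minimal sketch in Python. To be clear, this is my illustration, not Safe Work Australia’s: the 1–5 likelihood and severity scales, the score bands and the example hazards are all assumptions made up for this sketch, not an official WHS risk matrix.

```python
# A toy qualitative risk assessment, purely illustrative.
# The 1-5 scales and rating bands below are assumptions for
# this sketch, not an official WHS risk matrix.

def risk_rating(likelihood: int, severity: int) -> str:
    """Combine likelihood and severity (each 1-5) into a rating band."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be 1-5")
    score = likelihood * severity
    if score >= 15:
        return "high"    # eliminate or redesign before proceeding
    if score >= 6:
        return "medium"  # reduce so far as is reasonably practicable
    return "low"         # manage with routine controls

# Step 1: identify hazards; Step 2: assess each one.
hazards = [
    {"hazard": "unguarded rotating shaft", "likelihood": 4, "severity": 5},
    {"hazard": "trailing cables in walkway", "likelihood": 3, "severity": 2},
]
for h in hazards:
    h["rating"] = risk_rating(h["likelihood"], h["severity"])

# Step 3: control the worst risks first.
for h in sorted(hazards, key=lambda h: h["likelihood"] * h["severity"],
                reverse=True):
    print(h["hazard"], "->", h["rating"])
```

The point of the sketch is only the shape of the process: every hazard gets an explicit assessment, and the output forces you to deal with the biggest risks first rather than the easiest ones.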
Principle 4: Safe design knowledge and capability. If you’re controlling the design, if you’re doing technical work, or you’re managing it and making decisions, you must know enough about safe design and have the capability to put these principles into practice, to the extent that you need to discharge your duties. When I’m thinking of duties, I’m especially thinking of the health and safety duties of officers, managers and people who make decisions. You need to exercise due diligence (see the Work Health and Safety lessons for more about due diligence).
Principle 5: Information transfer. Part of our duties is not just to do stuff well, but to pass on the information that the users, maintainers, disposers, etc will need in order to make effective use of the design safely. That is through all the lifecycle phases of the product.
So those are the five principles of safe design, and I think they’re all obvious really, aren’t they? So, let’s move on.
A Model for Safe Design
As the saying goes, a picture is worth a thousand words. Here is the overview of the Safe Design Model, as they call it. We’ve got activities in a sequence running from top to bottom down the centre. Then on the left, we’ve got monitor and review; that is, somebody in a management or controlling function keeping an eye on things. On the right-hand side, we need to communicate and document what we’re doing. And of course, it’s not just documentation for documentation’s sake; we need to do this in order to fulfil our obligations to provide all the necessary information to users, etc. So that’s the basic layout.
If we zoom in on the early stage, Pre-Design, we need to think about: what problem are we trying to solve? What are we trying to do? What is the context for our enterprise? And that might be easy if you’re dealing with a physical thing. If you build a car, you make cars to be driven on the road. There’ll be a driver and maybe passengers in the car, and there’ll be other road users around you, pedestrians, etc. So, with a physical system, it’s relatively easy, with a bit of imagination and a bit of effort, to think about who’s involved. But of course, it’s not just use, but maintenance as well. Maybe it’s got to go into a garage for a service, etc., so how do we make things accessible for maintainers?
And then we move on to Concept Development. We might need to do some research, gather some information, think about previous systems or related systems and learn from them. We might have to consult with some stakeholders who are going to be affected by this enterprise. We put all of that together and use it to help us identify hazards. Again, if we’re talking about a physical system – say you make a new model of car – it’s probably not that different from the previous model that you designed. But of course, every so often you do encounter something that is novel, that hasn’t been done before, or maybe you’re designing something that is virtual, like software, and software is intangible. With intangible things, it’s harder to do this; it requires more forethought, effort and imagination than is perhaps the case with something simple and familiar like a car. It can be done, though, so don’t be frightened of it.
Moving on in the life cycle, we have Design Options. We might think about several different solutions; we might generate some solutions, and we might analyse and evaluate the risks of those options before selecting which option we’re going to go with. This doesn’t happen very often in reality, because often we’re designing something that’s familiar, or people say: well, actually, I’m buying a bunch of kit off the shelf (i.e. a bunch of components) and I’m just putting them together, so there is no ‘optioneering’ to do.
That’s actually incorrect, because very often people do ‘optioneering’ by default, in that they buy the component that is cheap and readily available, but they don’t necessarily ask: is the supplier going to provide the safety data that I need to go along with this component? And maybe the more reputable supplier that does provide it is going to charge you more. So, you need to think about where you are going to end up with all of this and evaluate your options accordingly. And of course, if you are making a system that is purely made from off-the-shelf components, there’s not a lot of design to do; there is just integration.
Well, that pushes all your design decisions and all your options much earlier in the lifecycle, much higher up on the diagram as we see here. So, we are still making design options and design decisions, but maybe it’s just not as obvious. I’ve seen a lot of projects come unstuck because they just bought something that they liked the look of; it appealed to the operators (if you live in an operator-driven organisation, you’ll know what I mean). Some people buy stuff because they are magpies and it looks shiny, fun and funky! Then they buy it, and people like me come along and start asking awkward questions about how you are going to demonstrate that this thing is safe to use and can be put into service. And, of course, it doesn’t always end well if you don’t think about these things upfront.
So, moving on to Design Synthesis. We’ll select a solution, put stuff together and work on controlling the risks in the system that we are building. I know it says eliminate and control risks, and if you can eliminate risks then that’s great, but very often you can’t. So, we have to deal with the risks that we cannot get rid of, and there are usually some risks in whatever you’re dealing with.
Then we get to Design Completion, where we implement the design, where we put it together and see if it does come together in the real world as we envisaged. That doesn’t always happen. Then we have got to test it and see whether it does what it’s supposed to do. We’re normally pretty good at testing for that, because if you set out requirements for what it’s supposed to do, then you’ve got something to test against. And of course, if you’re trying to sell a product or service, or you’re trying to get regulators or customers to buy into this thing, it’s got to do what you said it’s going to do. So, there’s a big incentive to test the thing to make sure it does what it should do.
We’re not always so good at testing it to make sure that it doesn’t do what it shouldn’t do. That can be a bigger problem space, depending on what you’re doing. And that is often the trick, and that’s where safety people get involved. The requirements engineers and systems engineers are great at saying: yeah, here are the requirements, test against the requirements. And then it’s the safety people that come along and say: oh, by the way, you need to make sure that it doesn’t blow up, catch fire or get so hot that it can burn people. You need to eliminate the sharp edges. You need to make sure that people don’t get electrocuted when operating or maintaining this thing or disposing of it. You must make sure they don’t get poisoned by any chemicals that have been built into the thing. Even thinking about: if I’ve had an accident and the vehicle, or whatever it is, has been damaged or destroyed, and I’ve now got debris spread across the place, how do we clear that up? For some systems, that can be a very challenging problem.
Ergonomics & Work Design
So, we’re going to move on now to a different subject, and a very important subject in safe design. I think this is one of the great things about safe design and good work design in Australia – that it incorporates ergonomics. We need to think about human interaction with the system, as well as the technical design itself, and I think that’s very important. It’s something that is very easy, especially for technical people, to miss. As engineers, some of us love diving into the detail; that’s where we feel comfortable, that’s what we want to do, and then maybe we sometimes miss the big picture – somebody is actually going to use this thing and make it work. So, we need to think about all of our workers to make sure that they stay healthy and safe at work. We need to think about how they are going to physically interact with the system, etc. It may not be just the physical system that we’re designing, but also, of course, the work processes that go around it, which is important.
It is worth pointing out that in the UK I’m used to a narrow definition of ergonomics: a definition that’s purely about the physical way that humans interact with the system. Can they do so safely and comfortably? Can they do repetitive tasks without getting injured? That includes anthropometric aspects, where we think about the variation in size of human beings of different sexes and different races, and how people fit in the machine or the vehicle, or interact with it.
However, in Australia, the way we talk about ergonomics, it’s a much bigger picture than that. I would say don’t just think about ergonomics; think about human factors. It’s the science of people working. Let’s understand human capabilities and apply that knowledge in the design of equipment, tools, systems and ways of working that we expect the human to use. Humans are pretty clever beasts in many ways, and we’re still very good at things that a lot of machines are just not very good at. So, we need to design stuff which complements the human being and helps the human being to succeed, rather than just optimising the technical design in isolation. This quotation is from the ARPANSA definition, because it was the best one that I could find in Australia. I will no doubt talk about human factors another time in some depth.
Under the law, (this is tailored for Australian law, but a lot of this is still good principles that are applicable anyway) different groups and individuals have responsibilities for safe design. Those who manage the design and the technical stuff directly and those who make resourcing decisions. For example, we can think about building architects, industrial designers, etc., who create the design. Individuals who make design decisions at any lifecycle phase, that could be a wide range of people and of course not just technical people, but stakeholders who make decisions about how people are employed, how people are to interact with these systems, how they are to maintain it and dispose of it, etc. And of course, work health and safety professionals themselves. There’s a wide range of stakeholders involved here potentially.
Also, anybody who alters the design; and it may be that we’re talking about a physical alteration to the design, or maybe we’re just using a piece of kit in a different context. So, we’re using a machine or a process or a piece of software that was designed to do X, and we’re actually using it to do Y, which is more common than you might think. If we are putting an existing design in a different context, one which was never envisaged by the original designers, we need to think about the implications of both the environment on the design and the design on the environment, and the human beings mixed up working in both.
There are a lot of accidents caused by modifying bits of kit, including, you might say, a signature accident in the UK: the Flixborough chemical plant explosion. That was one of the things that led to the creation of modern health and safety law in the UK. It was caused by people modifying the design and not fully realising the implications of what they were doing. Of course, the result was a gigantic explosion and lots of dead bodies. Hopefully it won’t always be so dramatic with the things that we’re looking at, but nevertheless, people do ask designs to do some weird stuff.
If we want safe design, we can get it more effectively and more efficiently when the people who control and influence outcomes and who make these decisions get together and collaborate on building safety into the design, rather than trying to add it on afterwards, which in my experience never goes well. We want to get people together and think about these things up front, where it’s maybe a desktop exercise or a meeting somewhere. It requires some people to attend the meeting and prepare for it and so on, and we need records, but that’s cheap compared to later design stages. When we’re talking about industrial plant or something that’s going to be mass-produced, making decisions later is always going to be more costly and less effective, and therefore it’s going to be less popular and harder to justify. So, get in and do it early while you still can. There’s some good guidance on all this stuff, on who is responsible.
There is the Principles of Good Work Design handbook, which was created by Safe Work Australia and is also on the Safety Artisan website (I gained permission to reproduce it), and there’s a model Code of Practice for the safe design of structures. There was going to be a model Code of Practice for the safe design of plant, but that never became a Code of Practice; it’s just guidance. Nevertheless, there is a lot of good stuff in there. And there are the Work Health and Safety Regulations.
And incidentally, there’s also a lot of good guidance on Major Hazard Facilities available. Major Hazard Facilities are anywhere you store large amounts of potentially dangerous chemicals. The safety principles in the MHF guidance are very good and generally applicable, not just to chemical safety, but to any large undertaking where you could hurt a lot of people. The MHF guidance, I believe, was originally inspired by the COMAH regulations in the UK, which again came from a major industrial disaster: the Piper Alpha platform in the North Sea, which caught fire and killed 167 people. It was a big fire. So, if you’ve got an enterprise where you could see a mass casualty situation, you’ll get a lot more guidance from the MHF stuff, which is really aimed at preventing industrial-sized accidents. So, there’s lots of good stuff available to help us.
Design for Plant
So, some examples of things that we should consider. We need to (and I don’t think this will be any great surprise to you) think about all phases of the life cycle; I think we’ve banged on about that enough. Whether it be plant (a waste plant in this case) or whatever it might be, from design and manufacture right through to disposal. Can we put the plant together? Can we erect the structure and install it? Can we facilitate safe use? Again, think about the physical characteristics of your users, but not just physical; think about the cognitive capabilities of your users too. If we’re making a control system, can the users use it to practically exploit the plant for the purpose it was meant for, whilst staying safe? What can the operator actually do? What can we expect them to perform successfully and reliably, time after time? Because we want this stuff to keep on working for a long, long time, in order to make money or to do whatever it is we want to do. And we also need to think about the environment in which the plant will be used – very important.
Some more examples. Think about the intended use and reasonably foreseeable misuse. If you know that a piece of kit tends to get misused for certain things, then either design against it or give the operator a better way of doing it. A really strange example: apparently the designers of a particular assault rifle knew that soldiers tended to use a bit of the rifle as a can opener, or to open beer bottles, so they incorporated a bottle opener into the design so that the soldiers would use that rather than damage the rifle opening bottles of beer. A crazy example, but I think it’s memorable. We have to consider intended use by law; if you go to the WHS lesson, you’ll see that written right through the duties of everybody. Reasonably foreseeable misuse, I don’t think, is a hard requirement in every case, but it’s still a sensible thing to do.
Think about the difficulties that workers might face doing repairs or maintenance. Again, sorry, I’ve banged on about that; I came from a maintenance world originally, so I’m very used to those kinds of challenges. And consider what could go wrong. Again, we’re getting back into classic design safety here. Think about the failure modes of your plant. Ideally, we’d always want a fail-safe design, but if we can’t have that, how can we warn people? How can we make sure we minimise the risk if something goes wrong and a foreseeable hazard occurs? And by foreseeable, I’m not just saying we shut ourselves in a darkened room and couldn’t think of anything; we need to look at real-world examples of similar pieces of kit. Look at real-world history, because there’s often an awful lot of learning out there that we can exploit, if only we bother to Google it or look it up in some way. As I think it was Bismarck, the great German leader, who said: only a fool learns from his own mistakes; a wise man learns from other people’s mistakes. That’s what we try to do in safety.
Moving on to lifecycle, this is a key concept. Again, I’ve gone on and on about this. We need to control risks not just in use, but during construction and manufacture, in transit, when it’s being commissioned and tested, when it’s being used and operated, and when it’s being repaired, maintained, cleaned or modified. And then there’s decommissioning; I say the end of life, but it may not be the end of life when it’s being decommissioned. Maybe we’re decommissioning kit to move it to a new site, or maybe a new owner has bought it. So, we need to be able to safely take it apart, move it and put it back together again. And of course, by that stage we may have lost the original packaging, so we may have to think quite carefully about how we do this; or maybe we can’t fully disassemble it as we did during the original installation, so maybe we’ve got to move an awkward bit of kit around. And then at the end of life, how are we going to dismantle it or demolish it? Are we going to dispose of it or, ideally, recycle it? Hopefully, if we haven’t built in anything too nasty or too difficult to recycle, we can do that. That would be a good thing.
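One way to picture the lifecycle idea is a risk register that simply refuses to accept a hazard entry unless every phase has been considered, so phases like transport or decommissioning can’t quietly be forgotten. This is my sketch, not anything from the Safe Work Australia material; the phase names, the data shapes and the example hazard are all illustrative assumptions.

```python
# Illustrative only: a register that rejects a hazard entry unless
# every lifecycle phase has a stated control (or an explicit
# "not applicable"). The phase names are assumptions based on the
# phases discussed in this lesson.

LIFECYCLE_PHASES = [
    "construction", "transport", "commissioning", "operation",
    "maintenance", "modification", "decommissioning", "disposal",
]

def register_hazard(name: str, controls_by_phase: dict) -> dict:
    """Raise if any lifecycle phase has no stated control."""
    missing = [p for p in LIFECYCLE_PHASES if p not in controls_by_phase]
    if missing:
        raise ValueError(f"No control stated for phases: {missing}")
    return {"hazard": name, "controls": controls_by_phase}

entry = register_hazard(
    "stored hydraulic energy",
    {
        "construction": "charge accumulators only after guards fitted",
        "transport": "ship accumulators discharged",
        "commissioning": "written pressurisation procedure",
        "operation": "relief valves and pressure interlocks",
        "maintenance": "lock-out/tag-out and bleed-down before work",
        "modification": "management-of-change review",
        "decommissioning": "verified depressurisation before strip-down",
        "disposal": "not applicable - energy removed at decommissioning",
    },
)
print(entry["hazard"])
```

The design point is that silence is not allowed: a phase with no answer is a gap in the analysis, whereas an explicit “not applicable” is a decision someone has made and can be held to.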
It’s worth reminding ourselves that we do get a safer product, one that is better for downstream users, if we eliminate and minimise those hazards as early as we can. As I said before, in these early phases there’s more scope to design stuff out without compromising the design, without putting limitations on what it can do. Whereas when you’re adding safety in later, so often that is achieved only at a cost: it limits what the users can do, or maybe you can’t run the plant at full capacity, or whatever it might be, which is clearly undesirable. So, designers must have a good understanding of the lifecycle of their kit, of the people who will interact with it and of the environment in which it’s used. Again, if you’ve listened to me talking about system safety concepts, we hammer this point: it’s not just the plant, it’s what you use it for, the people who will use it and the environment in which it is used. Especially for complex things, we need to take all of those things into account, and it’s not a trivial exercise to do this.
Then thirdly, as we go through the product life cycle, we may discover new risks, and this does happen. People make assumptions during the concept and design phase (and fair enough, you must make assumptions sometimes in order to get anything done). But those assumptions don’t always turn out to be completely correct, or something gets missed; we often miss some aspect. It’s the thing you didn’t anticipate that often gets you.
As we go through the lifecycle, we can further improve safety if people who have control over decisions and actions that are taken incorporate health and safety considerations at every stage and actually proactively look at whether we can make things better or whether something has occurred that we didn’t anticipate and therefore that needs to be looked into.
Another good principle that doesn’t always happen: we shouldn’t proceed to the next stage in the life cycle until we have completed our design reviews, we have thought about health and safety along with everything else, and those who have control have had a chance to consider everything together and, if they’re happy with it, to approve it so it moves on. It’s a very good illustration. Again, it will come as no surprise to many listeners that there are a lot of projects out there that either don’t put in design reviews at all, or you see design reviews being rushed. Lip service is paid to them very often, because people forget that design reviews are there to review the design and to make sure it’s fit for purpose and safe and all the other good things; we just get obsessed with getting through those design reviews, whether we’re the purchaser or we’re just keen to get on with the job, and the schedule must be maintained at all costs.
Or if you’re the supplier, you want to get through those reviews because there’s a payment milestone attached to them. There’s a lot of temptation to rush these things. Often, rushing these things just results in more trouble further downstream. I know it takes a lot of guts, particularly early in a project to say, no: we’re not ready for this design review, we need to do some more work so that we can get through this properly. That’s a big call to make, often because not a lot of people are going to like you for making that call, but it does need to happen.
Benefits of Safe Design
So, let’s talk about the benefits. These are not my estimates; these are Safe Work Australia’s words. They feel, from what they’ve seen in Australia (and, I suspect, from surveying safety performance elsewhere as well), that building safety into a plant can save you up to 10% of its cost. The examples here are reductions in holdings of hazardous materials, reduced need for personal protective equipment, and reduced need for field testing and maintenance, and that’s a good point. Very often we see large systems, large enterprises, brought into being without sufficient consideration of these things, and people think only about the capital costs of getting the kit into service. Now, if you’re spending millions or even possibly billions on a large infrastructure project, of course you will focus on the upfront costs for that infrastructure. And of course, you are focused on getting that stuff into service as soon as possible so you can start earning money to pay for the capital costs of it.
But it’s also worth thinking about safety upfront (and a lot of other design disciplines as well, of course), and making sure that you’re not building yourself a lifetime of pain, doing maintenance and testing that, to be honest, you really don’t want to be doing, but because you didn’t design something out, you end up with no choice. And so, we can hopefully eliminate or minimise those direct costs of unsafe design, which can be considerable: rework, compensation, insurance, environmental clean-up. You can be prosecuted by the government for criminal transgressions, and you can be sued by the relatives of the dead, by the injured, the inconvenienced, and those who’ve been moved off their land.
And these things will impact on parties downstream, not just the designers; in fact, often, but not always, just those who bought the product and used it. So, there’s a lot of incentive out there to minimise your liability, to get it right upfront and to be able to demonstrate that you got it right upfront. Particularly if you’re a designer or a manufacturer and you’re worried that some of your users are maybe not as professional and conscientious in using your stuff as you would like, because it’s still got your name and your company logo plastered all over it.
I don’t think there’s anything new in here. There are many benefits, and we see the direct benefits: we’ve prevented injury and disease, and that’s good – not just your own, but other people’s. We can improve usability; very often, if you improve safety through improving human factors and ergonomics, you’re going to get a more usable product that people like using. It is going to be more popular; maybe you’ll sell more. You’ll improve productivity, so those who are paying for the output are happy. You’ll reduce costs through life (you might have to spend a little bit more money upfront), and we can actually better predict and manage operations because we’re not having so many outages due to incidents or accidents.
Also, we can demonstrate compliance with legislation, which will help you plug the kit in the first place, but it is also necessary if you’re going to get past a regulator, or indeed if you don’t want to get sent to jail for contravening the WHS Act.
And benefits – well, innovation. I have to say innovation is a double-edged sword, because some industries love innovation and you’ll be very popular if you innovate; other industries hate innovation and you will not be popular if you innovate. That last bullet, I’m not so sure it’s about innovation. Safe design, I don’t think, necessarily demands new thinking; it just demands thinking. Because most things that I’ve seen go wrong that were preventable, that could have been stopped, only required a little bit of thought, a little bit of imagination and a little bit of learning from the past, not ‘innovating’ the future.
So that brings us neatly on to our legal obligations. In Australia (and in other countries there will be similar obligations), work health and safety law imposes duties on lots of people: designers, manufacturers, importers, suppliers, anybody who puts the stuff together, puts it up, modifies it or disposes of it. These obligations, as it says, will vary depending on the state or territory, or whether Commonwealth WHS applies. But if it’s WHS, it’s all based on the model WHS from Safe Work Australia, so it will be very similar. In the WHS lesson, I talk about what these duties are and what you must do to meet them. You will be pleased to know that the guidance on safe design is in lockstep with those requirements. So, this is all good stuff, not because I’m saying it, but because I’m actually showing you what’s come out of the statutory authority.
Yes, these obligations may vary; we talk about that quite a lot in other sessions. Those who make decisions – not just technical people, but those who control the finances – have duties under WHS law. Again, go and see the WHS lesson that talks about the duties, particularly the duties of senior management officers and due diligence. There are specific safety ‘due diligence’ requirements in WHS, which are very well written, very easy to read and understand. So there’s no excuse for not looking at this stuff; it is very easy to see what you’re supposed to do and how to stay on the right side of the law. And it doesn’t matter whether you’re an employer or self-employed, or whether you control a workplace or not: there are duties on designers upstream who will never go near the workplace where the kit is actually used. If a client has a building or structure designed and built for leasing, they become the owner of the building, and they may well retain health and safety duties for the lifetime of that building if it’s used as a workplace or to accommodate workers.
I just want to briefly recap on what we’ve heard. The big lesson that I’ve learned in my career is that safe design is not just a technical activity for the designers. I’ve worked in many organisations where, by pedigree and history, technical risks were managed over here and human or operational risks were managed over there, and there was a great gulf between them; they never interacted very much. There was a sort of handover point where the technical people would chuck the kit over the wall to the users and say: there, get on with it, and if you have an accident, it’s your fault because you’re stupid and you didn’t understand my piece of kit. And similarly, you’ve got the operators saying: all those technical people have no idea how we use the kit, or what we’re trying to do here, the big picture; they give us kit that is not suitable, or that we have to misuse in order to get it to do the job.
So, if you have these two teams playing separately, not interacting and not cooperating, it’s a mess. And certainly in Australia, there are very explicit requirements in the law and regulations – and a whole code of practice – on consultation, communication and cooperation. These two sides of the operation have got to come together in order to make the whole thing work. And WHS law does not differentiate between different types of risk; there is just risk to people. So you cannot hide behind the attitude of “well, I do technical risk, I don’t think about how people will use it” – you’ve just broken the law. You’ve got to think about the big picture; we can’t keep going on in our silos, our stovepipes.
That’s a little bit of a heart-to-heart, but that really, I think, is the value-add from all of this. The great thing about this safe design guidance is that it encourages you to think through life, to think about who is going to use the product, and to think about the environment. And quite cheaply and quite quickly, you can make some dramatic improvements in safety by thinking about these things.
I’ve met a lot of technical people who tend to look down their noses at a risk control measure if it isn’t technical, highly complicated or clever engineering. What we need to be looking at is how we reduce risk and what the benefits are in terms of risk reduction. It might be a really simple thing, one that seems almost trivial to a technical expert, that actually delivers the safety; that’s what we’ve got to think about, not necessarily having a clever technical solution. If we must have a clever technical solution to make it safe, so be it, but we’d quite like to avoid that most of the time if we can.
In Australia, in the 10 years to 2022, we have certain targets. We’ve got seven national action areas, and safe design is one of them. As I’ve said several times, Australian legislation requires us to consult, cooperate and coordinate, so far as is reasonably practicable, and we need to work together rather than chuck problems over the wall to somebody else. You might think you’ve delegated responsibility to somebody else, but if you’re an officer of the person conducting the business or undertaking, then you cannot ditch all of your responsibilities; you need to think very carefully about what’s being done in your name, because legally it can come back to you. You can’t just assume that somebody else is doing it and will do a reasonable job: it’s your duty to ensure that it is done, that you’ve provided the resources, and that it is actually happening.
And so, what we want to achieve in this 10-year period is a real reduction: a 30% reduction in serious injuries nationwide, and a reduction in work-related fatalities of at least a fifth. These are specific and valuable real-world targets. This is not an academic exercise; it’s about reducing the body count, and the number of people who end up in hospital, blinded or missing limbs. So it’s important stuff. And as it says, Safe Work Australia and all the different regulators have been working together with industry, unions and special interest groups to make this all happen. That’s all excellent stuff.
Safe Design – the End
And it just remains for me to say that most of the text I’ve shown you is from the Safe Work Australia website, reproduced under a Creative Commons license; you can see the full details on the safetyartisan.com website. And just to point out that the words of this presentation itself are copyright of The Safety Artisan. I just realised I drafted this in 2019, and it’s copyright 2020 – but never mind, I started writing it in 2019.
Now, if you want more lessons on safety topics, please visit the Safety Artisan page at Patreon.com, and there are many more safety resources and answers on the website. That is the end of the presentation, so thank you very much for listening and watching. From The Safety Artisan, I wish you a successful and safe 2020. Goodbye.
This is the Transcript: System Safety Concepts. The full version of the video is available here.
Transcript: System Safety Concepts
Hi everyone, and welcome to The Safety Artisan, where you will find professional, pragmatic and impartial advice on all things safety. I’m Simon, and welcome to the show today, which is recorded on the 23rd of September 2019. Today we’re going to talk about system safety concepts. A couple of days ago I recorded a short presentation on this, which is on the Patreon website and is also on YouTube. Today we are going to talk about the same concepts, but in much more depth.
Hence, this video is only available on the ‘Safety Artisan’ Patreon page. In the short session, we took some time picking apart the definition of ‘safe’. I’m not going to duplicate that here, so please feel free to go have a look. We said that to demonstrate that something was safe, we had to show that risk had been reduced to a level that is acceptable in whatever jurisdiction we’re working in.
And in this definition, there are a couple of tests that are appropriate to the U.K., but perhaps not elsewhere. We also must meet safety requirements. And we must define the scope and bounds of the system that we’re talking about, whether it’s a physical system or an intangible one, like a computer program. We must define what we’re doing with it, what it’s being used for, and the operating environment – the context – within which it is being used. And if we can do all those things, then we can objectively say, or claim, that this system is safe. OK, that’s very briefly that.
We’re going to talk about a lot more topics: risk, accidents, the cause-hazard-consequence sequence, requirements, and – spoiler alert – what I consider to be the essence of system safety. And then we’ll get into talking about the process of demonstrating safety: hazard identification and analysis;
risk estimation and reduction; risk evaluation and acceptance; and then pulling it all together with risk management and safety management. And finally, reporting: making an argument that the system is safe, supporting it with evidence, and summarizing all of that in a written report. This is what we do, albeit in different ways and calling it different things.
Onto the first topic: risk and harm. Our concept of risk is a combination of the likelihood and severity of harm. Generally, we’re talking about harm to people: death, injury, damage to health. Now, we might also choose to consider damage to property and the environment – that’s all good – but I’m going to concentrate on harm to people, because usually that’s what we’re required to do by the law. There are sometimes other laws covering the environment and property, but we’re not going to talk about those. Just to illustrate the point that risk is a combination of severity and likelihood:
we’ve got a very crude risk table here, with likelihood along the top and severity down the side. Looking at the table, we might see that if we have high likelihood and high severity, that’s a high risk; whereas if we have low likelihood and low severity, we might say that’s a low risk. In between, with a combination of high and low, we might say the risk is medium. Now, this is a very crude and simple example – deliberately so.
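As an aside, that crude table maps directly onto a simple lookup. Here is an illustrative sketch in Python – the category names and cell values are invented for this example, not taken from any standard; every project defines its own matrix:

```python
# Illustrative 3x3 risk matrix: the risk class is looked up from a
# (severity, likelihood) pair. Categories and table contents are
# invented for this example only.

LEVELS = ("low", "medium", "high")

RISK_MATRIX = {
    # (severity, likelihood): risk class
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "medium",
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "high",
}

def risk_level(severity: str, likelihood: str) -> str:
    """Return the risk class for a severity/likelihood combination."""
    if severity not in LEVELS or likelihood not in LEVELS:
        raise ValueError("unknown category")
    return RISK_MATRIX[(severity, likelihood)]
```

So `risk_level("high", "high")` gives `"high"` and `risk_level("low", "low")` gives `"low"` – exactly the two corner cases described above.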
You will see risk matrices like this in loads of different standards, and you may be required to define your own for a specific system. There are lots of variations, but they’re all basically doing the same thing: illustrating how we determine the level of risk from that combination of severity and likelihood. I think a picture is worth a thousand words. Moving on to the accident: we’re talking about (in this standard) an unintended event that causes harm.
Accidents, Sequences and Consequences
Not all jurisdictions consider just accidental events; some consider deliberate acts as well, but we’ll leave that out. (A good example of that is work health and safety in Australia, but no doubt we’ll get to that in another video sometime.) And the accident sequence is the progression of events that results in an accident. Now, we’re going to illustrate the accident sequence in a moment, but before we get there, we need to think about causes. Here we’ve got a hazard: a physical situation or state of a system, often following some initiating event, that may lead to an accident – a thing that may cause harm.
And allied with that, we have the idea of consequences: an outcome, or outcomes, resulting from an event. Now, that all sounds a bit woolly, doesn’t it? Let’s illustrate it; hopefully that will make it a lot clearer. I’ve got a sequence here: we have causes that might lead to a hazard, and the hazard might lead to different consequences, and that’s the accident sequence. Now, in this standard, causes aren’t explicitly defined.
Cause, Hazard and Consequence
They’re just called events, but mostly we will deal with causes and consequences in system safety, and it’s probably just easier to illustrate it that way. Whether or not you choose to explicitly address every cause is often an optional step. But this is the accident sequence that we’re looking at. The funnels are meant to illustrate the fact that there may be many causes for one hazard, and one hazard may lead to many consequences – and some of those consequences may be no harm at all.
We may not actually have an accident; we may get away with it. We may have a hazard, and no harm may befall a human. And if we take all of this together, that’s the accident sequence. Now, it’s worth reiterating that just because a hazard exists, it does not necessarily lead to harm. But to get to harm, we must have a hazard: a hazard is a necessary precondition for harmful consequences. OK.
Hazards: an Example
And you can think of a hazard as an accident waiting to happen; you can think of it in lots of different ways. Let’s think about an example: the hazard might be somebody slipping while walking. That slip might be caused by many things: it might be a wet surface – let’s say it’s been raining and the pavement is slippery – or it might be icy; it might be a spillage of oil, or you can imagine something slippery like ball bearings on a surface.
So, there’s something that’s caused the surface to become slippery. A person slips – that’s the hazard. Now the person may catch themselves; they may not fall over. They may suffer no injury at all. Or they might fall and suffer a slight injury; and, very occasionally, they might suffer a severe injury. It depends on many different factors. You can imagine if you slipped while going downstairs, you’re much more likely to be injured.
And younger, healthy, fit people are more likely to get over a fall without being injured, whereas if they’re very elderly and frail, a fall can quite often result in a broken bone. If an elderly person breaks a bone in a fall the chances of them dying within the next 12 months are quite high. They’re about one in three.
So, the level of risk is sensitive to a lot of different factors. To get an accurate estimate of risk, we’re going to need to factor in all those things. But before we get to that, we’ve already said that a hazard need not lead to harm. In this standard, where a hazard has occurred and could have progressed to an accident but didn’t, we call it an incident – a near miss.
We got away with it; we were lucky; whatever you want to call it, we’ve had an incident, but no one has been hurt. Hopefully, that incident gets reported, which will help us to prevent an actual accident in future. That’s another very useful concept, one that reminds us that not all hazards result in harm. Sometimes there will be no accident and no harm, simply because we were lucky, or because someone present took some action to prevent harm to themselves or others.
Mitigation Strategies (Controls)
But we would really like to deliberately design out or avoid hazards if we can. What we need is a mitigation strategy: a measure, or measures, that reduce risk when we put them into practice. Normally, we call these things controls. Now we’ve illustrated this: we’ve added some mitigation strategies to the funnels, and they are the dark blue dashed lines.
They are meant to represent barriers that prevent the accident sequence progressing towards harm. They have dashed lines because very few controls are perfect – everything’s got holes in it. And we might have several of them, but usually no control will cover all possible causes, and very few controls will deal with all possible consequences. That’s what those barriers are meant to illustrate.
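One common way to quantify that picture of imperfect barriers – not from this standard's text, but a sketch in the style of layer-of-protection analysis, with invented numbers – is to treat each control as having a probability of failing on demand, so harm only results when every barrier fails:

```python
from math import prod

def residual_frequency(initiating_freq: float, barrier_pfds: list[float]) -> float:
    """Frequency of harm: the initiating event's frequency multiplied by
    the probability that every barrier fails on demand (its PFD).
    The dashed lines in the diagram correspond to PFDs above zero."""
    return initiating_freq * prod(barrier_pfds)

# Invented example: a cause arising 0.5 times per year, with two
# barriers that each fail on 1 demand in 10.
harm_per_year = residual_frequency(0.5, [0.1, 0.1])  # roughly 0.005 per year
```

A perfect barrier would have a PFD of zero; real controls don’t, which is exactly why some residual likelihood of harm always remains.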
That idea, that picture, will be very useful to us later, when we are thinking about how we’re going to estimate and evaluate risk overall, what risk reduction we have achieved, and how we justify that what we’ve done is good. It’s a very powerful illustration. Now, let’s move on to safety requirements.
Now, I guess it’s no great surprise to say that requirements, once met, can contribute directly to the safety of the system. Maybe we’ve got a safety requirement that says all cars will be fitted with seatbelts, and let’s say we’re also required to wear them. That makes the system safer.
Or the requirement might say we need to provide evidence of the safety of the system. The requirement might refer to a process that we’ve got to follow, or a certain kind of evidence that we’ve got to provide. Safety requirements can cover either or both of these.
The Essence of System Safety
Requirements covering the safety of the system, or demonstrating that the system is safe, should give us assurance: adequate, or justified, confidence, supported with evidence gained by following a process. (We’ll talk more about process.) We meet safety requirements; we get assurance that we’ve done the right thing. And this really brings us to the essence of what system safety is: we’ve got all these requirements – everything is a requirement, really – including the requirement to demonstrate risk reduction.
And those requirements may apply to the system itself, the product; or they may apply to the process that generates the evidence, or to the evidence itself. Putting all those things together in an organized and orderly way really is the essence of system safety: this is where we are addressing safety in a systematic way, an orderly way, an organized way. (Those words will keep coming back.) That’s the essence of system safety, as opposed to the day-to-day task of keeping a workplace safe.
Maybe by mopping up spills and providing handrails so people don’t slip over – things like that. We’re talking about a more sophisticated level of safety, because we have a more complex, more challenging problem to deal with. That’s system safety. We will start on the process now, and we begin with hazard identification and analysis: first, we need to identify and list the hazards and the accidents associated with the system.
We’ve got a system, physical or not. What could go wrong? We need to think about all the possibilities. Then, having identified some hazards, we need to start doing some analysis: we follow a process that helps us to delve into the detail of those hazards and accidents, and to define and understand the accident sequences that could result. In fact, in doing the analysis, we will very often identify some more hazards that we hadn’t thought of before; it’s not a straight-through process, it tends to be iterative.
Ultimately, what we’re trying to do is reduce risk, and we want a systematic process for doing it – which is what we’re describing now. At some point, we must estimate the risk that we’re left with, before and after all these controls, these mitigations, are applied. That’s risk estimation. Again, there’s that systematic word: we’re going to use all the available information to estimate the level of risk we’ve got left, recalling that risk is a combination of severity and likelihood.
Now, as we get towards the end of the process, we need to evaluate risk against set criteria, and those criteria vary depending on which country we’re operating in or which industry we’re in: what regulations apply and what good practice is relevant. All those things can be a factor. In this case, this is a U.K. standard, so we’ve got two tests for evaluating risk. It’s a systematic determination using all the available evidence, and it should be an objective evaluation, as far as we can make it.
We should use set criteria to decide whether a risk can be accepted or not, and in the U.K. there are two tests for this. As we’ve said before, there is ALARP, the ‘As Low As is Reasonably Practicable’ test, which asks: have we put into practice all reasonably practicable controls to reduce risk? (This is a risk-reduction target.) And then there’s an absolute level of risk to consider as well, because even if we’ve taken all practicable measures, the risk remaining might still be so high as to be unacceptable in law.
Now, that test is specific to the U.K., so we don’t have to worry too much about it. The point is that there are objective criteria against which we must test or measure ourselves – an evaluation that will pop out a decision as to whether further risk reduction is necessary. If the risk level is still too high, we might conclude that there are still reasonably practicable measures we could take; then we’ve got to take them.
We have an objective decision-making process to ask: have we done enough to reduce risk? And if not, we need to do some more, until we get to the point where we can apply the test again and say: yes, we’ve done enough. Right, that’s rather a long-winded way of explaining it – I apologize – but it is a key issue, and it does trip up a lot of people.
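To make that evaluate-reduce cycle concrete, here is a deliberately simplified sketch – my own illustration, not from the standard; the scores and threshold are invented, and a real ALARP judgment also weighs cost against risk reduction, so it cannot truly be reduced to arithmetic like this:

```python
def evaluate_and_reduce(risk_score: int, practicable_controls: list[int],
                        tolerable: int = 3) -> int:
    """Apply every reasonably practicable control (each reduces the
    score), then test the residual risk against an absolute limit.
    Scores 1 (low) to 9 (high) are invented for illustration."""
    for reduction in practicable_controls:
        # Controls are imperfect; the score never reaches zero.
        risk_score = max(1, risk_score - reduction)
    if risk_score > tolerable:
        # All practicable measures taken, yet the risk is still too
        # high in absolute terms: the design itself must change.
        raise RuntimeError("residual risk intolerable; redesign needed")
    return risk_score

residual = evaluate_and_reduce(8, [3, 3])  # 8 -> 5 -> 2, which is tolerable
```

Note that the sketch applies *all* practicable controls before testing the absolute limit, mirroring the two-test structure described above: reasonably practicable reduction first, then an absolute check.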
Now, once we’ve concluded that we’ve done enough to reduce risk and no further risk reduction is necessary, somebody should be in a position to accept that risk. Again, it’s a systematic process, by which relevant stakeholders agree that risks may be accepted. In other words, somebody with the right authority has said yes, we’re going to go ahead with the system and put it into practice, implement it. The resulting risks to people are acceptable, providing we apply the controls.
And we accept that responsibility. Those people who are signing off on those risks are exposing themselves and/or other people to risk – usually employees, but sometimes members of the public or customers as well. If you’re going to put customers in an airliner, you’re saying: yes, there is a level of risk to passengers, but the regulator, or whoever, has deemed [the risk] to be acceptable. It’s a formal process to get those risks accepted and say: yes, we can proceed. But again, that varies greatly between different countries and different industries, depending on what regulations, laws and practices apply. (We’ll talk about different applications in another section.)
Now, putting all this together, we call this risk management. Again, that wonderful systematic word: a systematic application of policies, procedures and practices to these tasks. We have hazard identification and analysis, risk estimation, risk evaluation, risk reduction and risk acceptance. It’s helpful to see that we’ve got a process here, where we go through these things in order. Now, this is a simplified picture, because it kind of implies that you just go through the process once.
With a complex system, you may go through the process several times. We may identify further hazards when we get into hazard analysis and estimating risk – even as late as applying controls and getting to risk acceptance – and discover that we need to do additional work. We may try to apply controls and discover that the controls we thought were going to be effective are not.
Then our evaluation of the level of risk and its acceptability is wrong, because it was based on the premise that the controls would be effective, and we’ve discovered that they’re not; so we must go back and redo some work. Maybe, as we go through, we even discover hazards that we hadn’t anticipated before. This can and does happen; it’s not necessarily a straight-through process. We can iterate through it, perhaps several times, while we are moving forward.
OK, safety management. We’ve gone to a higher level than risk, really, because we’re thinking about requirements as well as risk. We’re going to apply organization, we’re going to apply management principles, to achieve safety with high confidence. For the first time, we’ve introduced this idea of confidence in what we’re doing. Well, I say the first time – this is assurance, isn’t it? Assurance: having justified, or appropriate, confidence because we’ve got the evidence. And that might be product evidence too; we might have tested the product to show that it’s safe.
We might have analysed it. We might show that we have followed the process, which gives us confidence that our evidence is good, that we’ve done all the right things and identified all the risks. That’s safety management. We need to put that in a safety management system: we’ve got a defined organizational structure, and we have defined processes, procedures and methods. That gives us direction and control of all the activities that we need to put together, in combination, to effectively meet safety requirements and safety policy.
And our safety tests, whatever they might be. More and more now, we’re thinking about top-level organization and planning to achieve the outcomes we need, with a complex system, a complex operating environment and a complex application.
Now, I’ll just mention planning. We need a safety management plan that defines the strategy: how we’re going to get there, how we’re going to address safety. We need to document the safety management system for a specific project. Planning is very important for effective safety; safety is very vulnerable to poor planning. If a project is badly planned, or not planned at all, it becomes very difficult to do safety effectively, because we are dependent on following a rigorous process to give us confidence that our results are correct. If you’ve got a project that is a bit haphazard, that’s not going to help you achieve the objectives.
Planning is important. Now, the part of the safety plan that deals with timescales, milestones and other date-related information we might refer to as a safety program. This being a UK definition, British English has two spellings of program: the double-m-e version, ‘programme’, applies to that time-based, or milestone-based, progression.
Whereas in the US, and in Australia for example, we don’t have those two words; we just have the one word, ‘program’, which covers everything: computer programs, and a program of work that may or may not be determined by timescales or milestones. The point is that certain things may have to happen at certain points in time, or before certain milestones. We may need to demonstrate safety before we are allowed to proceed to tests and trials, or before we are allowed to put our system into service.
We’ve got to demonstrate that safety has been achieved before we expose people to risk. That’s very simple. Now, finally, we’re almost at the end. We need to provide a demonstration – maybe to a regulator, maybe to customers – that we have achieved safety. This standard uses the concept of a safety case. The safety case is, basically – imagine a portfolio full of evidence – a structured argument to put it all together, with a body of evidence that supports the argument.
It provides a compelling, comprehensible (or understandable) and valid case that a system is safe for a given application or use, in a given operating environment. Really, that definition of a safety case harks back to that meaning of safety; we’ve got something that really hits the nail on the head. And we might put all of that together and summarise it in a safety case report, which summarises the arguments and evidence, and documents progress against the safety programme.
Remember, I said our planning was important. We started off saying that we need to do this, that and the other in order to achieve safety. Hopefully, in the end, in the safety report, we’ll be able to state that we’ve done exactly that: we did do all those things; we did follow the process rigorously; we’ve got good results; we’ve got a robust safety argument, with evidence to support it. At the end, it’s all written up in a report.
Now, that isn’t always going to be called a safety case report; it might be called a safety assessment report or a design justification report – there are lots of names for these things. But they all tend to do the same kind of thing: they pull together the argument as to why the system is safe, the evidence to support the argument, and the progress against a plan, or against some set of process requirements from a standard, a regulator, or just good practice in an industry, to say: yes, we’ve done what we were expected to do.
The result is usually what justifies [the system] getting past that milestone, where the system goes into service and can be used. People can be exposed to those risks, but safely and under control.
Everyone’s a winner, as they say!
Copyright – Creative Commons Licence
Okay. I’ve used a lot of information from the UK government website, and I’ve done that in accordance with the terms of its Creative Commons license; you can see more about that [here]. We have complied with that, as we are required to, and the information we’ve supplied is shared with you under the terms of the same license.
And for more resources and more lessons on system safety, and other safety topics, I invite you to visit the safetyartisan.com website, or to go and look at the videos on Patreon, at my Safety Artisan page: www.Patreon.com/SafetyArtisan. Thanks very much for watching; I hope you found that useful.
We’ve covered a lot of information there, but hopefully in a structured way. We’ve repeated the key concepts, and you can see that, in the standard, the key concepts are consistently defined and reinforce each other. That gives us the systematic, disciplined approach to safety that we need.
Anyway, that’s enough from me. I hope you enjoyed watching and found that useful. I look forward to talking to you again soon. Please send me some feedback about what you thought about this video and also what you would like to see covered in the future.
Thank you for visiting the Safety Artisan. I look forward to talking to you again soon. Goodbye.
Transcript: system safety concepts – Links
You can see the full video at the Safety Artisan Patreon Page!
You can see the Short Video posted here.