
Introduction to Human Factors

In this 40-minute video, ‘Introduction to Human Factors’, I am very pleased to welcome Peter Benda to The Safety Artisan.

Peter is a colleague and Human Factors specialist, who has 23 years’ experience in applying Human Factors to large projects in all kinds of domains. In this session we look at some fundamentals: what does Human Factors engineering aim to achieve? Why do it? And what sort of tools and techniques are useful? As this is The Safety Artisan, we also discuss some real-world examples of how erroneous human actions can contribute to accidents, and how the Human Factors discipline can help to prevent them.

Topics: Introduction to Human Factors

  • Introducing Peter;
  • The Joint Optimization Of Human-Machine Systems;
  • So why do it (HF)?
  • Introduction to Human Factors;
  • Definitions of Human Factors;
  • The Long Arm of Human Factors;
  • What is Human Factors Integration? and
  • More HF sessions to come…

Transcript: Introduction to Human Factors


Introduction

Simon:  Hello, everyone, and welcome to the Safety Artisan: Home of Safety Engineering Training. I’m Simon and I’m your host, as always. But today we are going to be joined by a guest, a Human Factors specialist, a colleague, and a friend of mine called Peter Benda. Now, Peter started as one of us, an ordinary engineer, but unusually, perhaps, for an engineer, he decided he didn’t like engineering without people in it. He liked the social aspects and the human aspects, and so he began to specialize in that area. And today, after twenty-three years in the business, with a first degree and a master’s degree in engineering with a Human Factors speciality, he’s going to join us and share his expertise with us.

So that’s how you got into it then, Peter. For those of us who aren’t really familiar with Human Factors, how would you describe it to a beginner?

Peter:   Well, I would say it’s the joint optimization of human-machine systems. So it’s really focusing on designing systems, perhaps ‘holistically’ would be a term that could be used, where we’re looking at optimizing the human element as well as the machine element, and the interaction between the two. So that’s really the key to Human Factors. And, of course, there are many dimensions from there: environmental, organizational, job factors, human and individual characteristics. All of these influence behaviour at work and health and safety. Another way to think about it is the application of scientific information concerning humans to the design of systems for human use, which I think most systems are.

Simon:  Indeed. Otherwise, why would humans build them?

Peter:   That’s right. Generally speaking, sure.

Simon:  So, given that this is a thing that people do then. Perhaps we’re not so good at including the human unless we think about it specifically?

Peter:   I think that’s fairly accurate. I would say that if you look across industries, the industries that are perhaps better at integrating Human Factors considerations into the design lifecycle have had to do so because of the accidents that have occurred in the past. You could probably say this about safety engineering as well, right?

Simon:  And this is true, yes.

Peter:   In a sense, you do it because you have to, because the implications of not doing it are quite significant. However, I would say the upshot, if you look at some of the evidence – and you see this also across software design and non-safety-critical industries or systems – is that taking into account human considerations early in the design process typically ends up in better system performance. You might have more usable systems, for example. Apple would be an example of a company that puts a lot of focus into human-computer interaction, optimizing the interface between humans and their technologies and ensuring that you can walk up and use it fairly easily. Now, as time goes on, one can argue about how well Apple is doing at that, but they were certainly very well known for taking that approach.

Simon:  And reaped the benefits accordingly and became, I think, they were the world’s number one company for a while.

Peter:   That’s right. That’s right.

Simon:  So, thinking about the, “So why do it?” What is one of the benefits of doing Human Factors well?

Peter:   Multiple benefits, I would say. Clearly, safety in safety-critical systems, like health and safety; performance, so system performance; efficiency and so forth. Job satisfaction, and that has repercussions that go back into society, broadly speaking. If you have meaningful work, that has other repercussions, and that’s the angle I originally came into all of this from. But, you know, you could be looking at just the safety and efficiency aspects.

Simon:  You mentioned meaningful work: is that what attracted you to it?

Peter:   Absolutely. Absolutely. Yes. Yes, like I said I had a keen interest in the sociology of work and looking at work organization. Then, for my master’s degree, I looked at lean production, which is the Toyota approach to producing vehicles. I looked at multiskilled teams and multiskilling and job satisfaction. Then looking at stress indicators and so forth versus mass production systems. So that’s really the angle I came into this. If you look at it, mass production lines where a person is doing the same job over and over, it’s quite repetitive and very narrow, versus the more Japanese style lean production. There are certainly repercussions, both socially and individually, from a psychological health perspective.

Simon:  So, you get happy workers and more contented workers-

Peter:   –And better quality, yeah.

Simon:  And again, you mentioned Toyota. Another giant company that’s presumably grown partly through applying these principles.

Peter:   Well, they’re famous for quality, aren’t they? Famous for reliable, high-quality cars that go on forever. I mean, when I moved from Canada to Australia, I found Toyota has a very, very strong history here with the Land Cruiser, and the HiLux, and so forth.

Simon:  All very well-known brands here. Household names.

Peter:   Are known to be bombproof and can outlast any other vehicle. And the lean production system certainly has, I would say, quite a bit of responsibility for the production of these high-quality cars.

Simon:  So, we’ve spoken about how you got into it and “What is it?” and “Why do it?” We’ve said what it is in very general terms, but I suspect a lot of people listening will want to know what Human Factors is based on doing it, on how you do it. It’s a long, long time since I did my Human Factors training, just one module in my master’s, so could you take me through what Human Factors involves these days, in broad terms?

Peter:   Sure, I actually have a few slides that might be useful –  

Simon:  – Oh terrific! –

Peter:   –maybe I should present that. So, let me see how well I can share this. And of course, sometimes the problem is I’ll make sure that – maybe screen two is the best way to share it. Can you see that OK?

Simon:  Yeah, that’s great.

Introduction to Human Factors

Peter:   Intro to Human Factors. So, Stewart Dickinson, who I work with at Human Risk Solutions, and I have prepared some material for some courses we taught to industry. I’ve some other material, and I’ll just flip to some of the key slides going through “What is Human Factors?”. So, let me try to get this working and I’ll just flip through quickly.

Definitions of Human Factors

Peter:   So, as I’ve mentioned already, broadly speaking, it’s the environmental, organizational, and job factors, and human and individual characteristics, which influence behaviour at work in a way which can affect health and safety. That’s a focus of Human Factors. Or the application of scientific information concerning humans to the design of objects, systems and environments for human use. You see a pattern here: fitting the work to the worker. The term ergonomics is used interchangeably with Human Factors. It also depends on the country you learned this in, or apply it in.

Simon:  Yes. In the U.K., I would be used to using the term ergonomics to describe something much narrower than Human Factors but in Australia, we seem to use the two terms as though they are the same.

Peter:   It does vary. You can say physical ergonomics, and I think that would typically represent what people think of when they think of ergonomics: workstation design. So, sitting at their desk, heights of tables or desks, and reach, and so on. And particularly given the COVID situation, so many people sitting at their desks are probably getting some repetitive strain –

Simon:  –As we are now in our COVID 19 [wo]man caves.

Peter:   That’s right! So that’s certainly an aspect of Human Factors work because that’s looking at the interaction between the human and the desk/workstation system, so to speak, on a very physical level.        

            But of course, you have cognitive ergonomics as well, which looks at the perceptual and cognitive aspects of work. So Human Factors, or ergonomics, broadly speaking, would be looking at these multi-dimensional facets of human interaction with systems.

Definitions of Human Factors (2)

Peter:   Some other examples might be the application of knowledge of human capabilities and limitations to the design, operation and maintenance of technological systems, and I’ve got a little distilled, or summarized, bit on the right here: Human Factors applies scientific knowledge to the development and management of the interfaces between humans and rail systems. So, this is obviously in the rail context, but you’re, broadly speaking, talking in terms of technological systems. That covers all of the people issues we need to consider to assure safe and effective systems or organizations.

Again, this is very broad. Engineers often don’t like these broad topics or broad approaches. I’m an engineer; I learned this through engineering, which is a bit different from how some people get into Human Factors.

Simon:  Yeah, I’ve met a lot of Human Factors specialists who come in from a first degree in psychology.

Peter:   That’s right. I’d say that’s fairly common, particularly in Australia and the UK. Although I know that you could take it here in Australia in some of the engineering schools, it’s fairly rare. There’s an aviation Human Factors program, I think, at Swinburne University; they used to teach it through mechanical engineering there as well, and I did a bit of teaching into that. I’m not across all of the universities in Australia, but there are a few. I think the University of the Sunshine Coast has quite a significant group at the moment that’s come from, or had some connection to, Monash before that. When I’m doing this work, I think about “What existing evidence do we have?”, or what existing knowledge base, with respect to the human interactions with the system. For example, working with a rail transport operator, they will already have a history of incidents or a history of issues, and we’d be looking to improve performance, perhaps, or reduce the risk associated with the use of certain systems. So we’re really focusing on the evidence that exists either already in the organization or out there in the public domain, through research papers and studies and accident analyses and so forth. I think, much like safety engineering, there would be quite a few similarities in terms of the evidence base –

Simon:  – Indeed.

Peter:   – Or creating that evidence through analysis. So, using some analytical techniques, various Human Factors methods and that’s where Human Factors sort of comes into its own. It’s a suite of methods that are very different from what you would find in other disciplines.

Simon:  Sure, sure. So, can you give us an overview of these methods, Peter?

Peter:   I’m trying to think if I have a slide for this. Hopefully, I do.

Simon:  Oh, sorry. Have I taken you out of sequence?

Peter:   No, no. Not out of sequence. Let me just flip through, and take a look at –

The Long Arm of Human Factors

Peter:   This is probably a good sort of overview of the span of Human Factors, and then we can talk about the sorts of methods that are used for each of these, let’s call them, dimensions. So, we have what’s called the long arm of Human Factors. It’s a large range of activities, from the very physical ergonomics we were talking about, e.g. sitting at a desk and so on, manual handling, and workplace design, moving through to interface design with respect to human-machine interfaces (HMIs, as they’re called) or user interfaces. There are manual handling analysis techniques; you might be using something like a task analysis combined with a NIOSH lifting equation and so on. For workplace design, you’d be looking at anthropometric data. So, you would have a dataset that’s hopefully representative of the population you’re designing for, and you may have quite specific populations. So Human Factors engineering is fairly extensively used, I would say, in military projects – in the military context –

Simon:  – Yes.

Peter:   – And there’s a set of standards – MIL-STD-1472G, for example, from the United States, is a great example – that gives not only manual handling guidelines but workplace design guidelines too. The workplace, in a military sense, can be a vehicle, or on a ship, or on a base, and so forth.
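(As an aside, the revised NIOSH lifting equation Peter mentions can be sketched roughly as below. The frequency and coupling multipliers normally come from published NIOSH tables, so they are defaulted to illustrative placeholder values here; this is a sketch, not a substitute for the official applications manual.)

```python
# Rough sketch of the revised NIOSH lifting equation (metric form):
# RWL = LC * HM * VM * DM * AM * FM * CM
def recommended_weight_limit(h_cm, v_cm, d_cm, a_deg, fm=1.0, cm=1.0):
    """Recommended Weight Limit (kg) for a lifting task.

    h_cm:  horizontal distance of the hands from the midpoint between the ankles
    v_cm:  vertical height of the hands at the origin of the lift
    d_cm:  vertical travel distance of the lift
    a_deg: asymmetry angle (degrees of trunk twist)
    fm, cm: frequency and coupling multipliers -- normally read from
            NIOSH tables; defaulted to 1.0 here as placeholders.
    """
    LC = 23.0                                  # load constant, kg
    HM = min(1.0, 25.0 / max(h_cm, 25.0))      # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)        # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / max(d_cm, 25.0))  # distance multiplier
    AM = 1.0 - 0.0032 * a_deg                  # asymmetry multiplier
    return LC * HM * VM * DM * AM * fm * cm

def lifting_index(load_kg, rwl_kg):
    """Lifting Index: LI > 1 suggests increased risk of lifting injury."""
    return load_kg / rwl_kg
```

For an ideal lift (hands 25 cm out, 75 cm up, 25 cm of travel, no twist), the RWL comes out at the 23 kg load constant; heavier loads or worse geometry push the Lifting Index above 1.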

Interface design – So, if you’re looking at it from a methods perspective, you might have usability evaluations, for example. You might do workload studies and so forth, looking at how well the interface supports particular tasks or achieving certain goals.

            Human error – There are human error methods that typically leverage off of task models. So, you’d have a task model, and you would look at, for that particular task, what sorts of errors could occur; there are structured methods for that.

Simon:  Yes, I remember human task analysis –seeing colleagues use that on a project I was working on. It seemed quite powerful for capturing these things.

Peter:   It is, and you have to pragmatically choose the level of analysis, because you could go down to a very granular level of detail. But that may not be useful, depending on the sort of system design you’re doing, the amount of money you have, and how critical the task is. So, you might have a significantly safety-critical task, and that might need quite a detailed analysis. An example there would be, I believe, the Virgin Galactic test flight – you can look up the accident analysis online; I have it somewhere in my archive of accident analyses. This was one of those test flights in the U.S. that the FAA had approved to go ahead. There were two pilots, a pilot and a co-pilot, taking this near-space test vehicle to high altitude, moving at quite a high speed. And there was a particular task where – I hope I don’t get this completely wrong – they had to slow the vehicle down, I guess by reducing the throttle, and then at a certain point, a certain speed, they could deploy, or control, the ailerons or some such wing-based device, and the task order was very important. What had happened was the pilot or the co-pilot performed the task slightly out of order, doing one thing first before they did another thing, and that led to the plane breaking up. Fortunately, one of the pilots survived; unfortunately, one didn’t.

Simon:  So, very severe results from making a relatively small mistake.

Peter:   So that’s a task order error, which is very easy to make. And if the system had been designed in a way to prevent the capability to execute that action at that point, that would have been a safer design. At that level, you might be going down to what gets called keystroke-level analysis and so on –

Simon:  – Where it’s justified, yes.
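(The design fix Peter describes, preventing an action from being executed out of sequence, can be illustrated with a minimal interlock sketch. The step names here are hypothetical, chosen only to echo the anecdote, not taken from the actual accident report.)

```python
# Minimal sketch of a sequence interlock: an action is only allowed
# if it is the next expected step, blocking task-order errors by design.
class SequenceInterlock:
    def __init__(self, required_order):
        self.required_order = list(required_order)
        self.next_index = 0  # index of the next step expected

    def attempt(self, action):
        """Allow `action` only if it is the next step in the sequence."""
        if self.next_index >= len(self.required_order):
            return "BLOCKED: sequence already complete"
        expected = self.required_order[self.next_index]
        if action != expected:
            return f"BLOCKED: {action!r} attempted before {expected!r}"
        self.next_index += 1
        return f"OK: {action!r} performed"

# Hypothetical step names for illustration only:
lock = SequenceInterlock(["reduce_throttle", "unlock_device", "deploy_device"])
```

With this guard in place, attempting `unlock_device` before `reduce_throttle` is simply refused by the system, rather than relying on the operator to remember the order.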

Peter:   Task analysis is, I think, probably one of the most common tools used. You also have workload analysis, looking at, for example, interface design. I know on some of the projects we were working on together, Simon, workload was a consideration. There are different ways to measure workload. There’s the NASA TLX, which is a subjective workload questionnaire, essentially, that’s done post-task, but it’s been shown to be quite reliable and valid as well. So, that instrument is used, and there are a few others. It depends on the sort of study you’re doing, the amount of time you have, and so forth. Let me think – that’s workload analysis.
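(For reference, the overall NASA TLX score is conventionally a weighted average of six subscale ratings, with the weights derived from 15 pairwise comparisons between the subscales. The ratings and weights in this sketch are made-up illustrative values, not data from any real study.)

```python
# Sketch of a NASA TLX overall workload score: six subscales rated
# 0-100, weighted by how often each was chosen across the 15 pairwise
# comparisons, then averaged. Ratings and weights below are illustrative.
TLX_SUBSCALES = ("mental", "physical", "temporal",
                 "performance", "effort", "frustration")

def tlx_score(ratings, weights):
    """Weighted TLX score in 0-100. `weights` must sum to 15,
    the number of pairwise comparisons among six subscales."""
    assert set(ratings) == set(TLX_SUBSCALES) == set(weights)
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in TLX_SUBSCALES) / 15.0

# Illustrative post-task responses for one participant:
ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
```

Here the mentally demanding subscales carry most of the weight, so the overall score lands closer to the mental-demand rating than a plain average would.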

Safety culture- I wouldn’t say that’s my forte. I’ve done a bit of work on safety culture, but that’s more organizational and the methods there tend to be more around culpability models and implementing those into the organizational culture.

Simon:  So, more governance type issues? That type of thing?

Peter:   Yes. Governance and – whoops! Sorry, I didn’t mean to do that. I’m just looking at the systems and procedure design. The ‘e’ is white so it looks like it’s a misspelling there. So it’s annoying me …

Simon:  – No problem!

Peter:   Yes. So, there are models I’ve worked with at organizations, such as some rail organizations, where they look at governance, but also at appropriate interventions. So, if there’s an incident, what sort of intervention is appropriate? Essentially, you use a model of culpability and human error and then overlay that, or use it as a lens through which to analyse the incident. Then you appropriately either train employees or management and so on. Or perhaps it was a form of violation, a wilful violation, as it may be –

Simon:  – Of procedure?

Peter:   Yeah, of procedure and so on versus a human error that was encouraged by the system’s design. So, you shouldn’t be punishing, let’s say, a train driver for a SPAD if the –

Simon:  – Sorry, that’s a Signal Passed At Danger, isn’t it?

Peter:   That’s right. Signal Passed At Danger. So, it’s certainly possible that the way the signalling is set up leads to a higher chance of human error. You might have multiple signals at a location and it’s confusing to figure out which one to attend to and you may misread and then you end up SPADing and so on. So, there are, for example, clusters of SPADs that will be analysed and then the appropriate analysis will be done. And you wouldn’t want to be punishing drivers if it seemed to be a systems design issue.

Simon:  Yes. I saw a vivid illustration of that on the news, I think, last night. There was a news article about an air crash that tragically killed three people a few months ago here in South Australia. The news report today was saying it was human error, but when they actually got into reporting what had happened, it was pointed out that the pilot being tested was flying a twin-engine aeroplane and doing an engine-failure-after-take-off drill. And the accident report said that the procedure they were using allowed them to do that engine failure drill at too low an altitude. So, if the pilot failed to take the correct action very quickly – bearing in mind this is a pilot being tested because they are undergoing training – there was no time to recover, and therefore the aircraft crashed. So, I thought, ”Well, it’s a little bit unfair just to say it’s human error when they were doing something that was intrinsically inappropriate for a person of that skill level.”

Peter:   That’s an excellent example, and you hear this in the news a lot: human error, human error, human error. You saw this, I think, with the recent Boeing problems with the flight control system for the new 737s. And of course, there will be reports; some of the interim reports already talk about some of the Human Factors issues inherent in that, and I would encourage people to look up the publicly available documentation on that –

Simon:  – This is the Boeing 737 Max accidents in Indonesia and in Ethiopia, I think.

Peter:   That’s correct. That’s correct. Yes, absolutely. And pilot error was used as the general explanation, but under further analysis, you start looking at that error, and that so-called error perhaps has other causes, which are systems design causes, perhaps. So these things are being investigated but have been written about quite extensively. And you can look at, of course, any number of aeroplane accidents and so on. There’s a famous Air France one, flying from Brazil to Paris, from what I recall – it might have been Rio de Janeiro to Paris – where the pitot –

Simon:  – Yeah, pitot probes got iced up.

Peter:    Probes, they iced up, and it was dark, or it might have been a storm, so the pilots didn’t have any ability to gauge what was going on by looking outside. There was some difficulty in gauging what was happening outside of the aeroplane, and then there were misreads. So, stall alarms going off and so on, and some misreadings on the airspeed coming from the sensors, essentially. And then the pilots acted according to that information, but that information was incorrect. So, you could say there was probably a cascade of issues that occurred there, and there’s a fairly good analysis one can look up that looks at the design. I believe it was an Airbus, and it was the design of the Airbus such that you had one pilot providing an input in one direction to the control yoke and the other pilot in the other direction. There were a number of things that broke down. And typically, you’ll see this in accidents: you’ll have a cascade as they’re trying to troubleshoot and can’t figure out what’s going on, they’ll start applying various approaches to try and remedy the situation, and people begin to panic and so on.

            And you have training techniques, such as crew resource management, which certainly has a strong Human Factors element, or comes out of the Human Factors world, which looks at how to have teams in cockpits, and in other settings, working effectively in emergency situations. And that comes, of course, after analysing failures.

Simon:  Yes, and I think CRM, crew resource management, has been adopted not just in the airline industry, but in many other places as well, hasn’t it?

Peter:   Operating theatres, for example. There’s quite a bit of work from the 90s that started with, I think it was, David Gaba, who I think was at Stanford – this is all from memory – that then looked at operating theatres. In fact, the Monash Medical Centre in Clayton had a simulation centre for operating theatres where they were applying these techniques to training operating theatre personnel: surgeons, anaesthetists, nurses and so forth.

Simon:  Well, thanks, Peter. And I’m sorry, I think I hijacked your presentation, but –

Peter:   It’s not really a presentation anyway; it was more sort of guidance there. We were talking about methods, weren’t we? And it’s easy to go from methods to talking about accidents, because then we talk about the application of some of these methods, or how these methods are applied to prevent accidents from occurring.

Simon:  Cool. Well, thanks very much, Peter. I think maybe I’ll let the next time we have a chat I’ll let you talk through your slides and we’ll have a more in-depth look across the whole breadth of Human Factors.

Peter:   So that’s probably a good little intro at the moment anyway. Perhaps I might pull up one slide on Human Factors integration before we end.

Simon:  Of course.

Peter:   I’ll go back a few slides here.

What is Human Factors Integration?

Peter:   And so what is Human Factors integration? I was thinking about this quite a bit recently because I’m working on some complex projects that are not only complex but quite large engineering projects, with lots of people, lots of different groups involved, different contracts and so forth, and the integration issues that occur. They’re not only Human Factors integration issues; there are larger-scale integration issues, engineering integration issues. Generally speaking, this is something I think that projects often struggle with. And I was really thinking about the Human Factors angle and Human Factors integration. That’s about ensuring that all of the HF (Human Factors) issues in a project are considered and controlled throughout the project, and deliver the desired performance and safety improvements. So, three functions of Human Factors integration:

  • confirm the intended system performance objectives and criteria;
  • guide and manage the Human Factors aspects of the design lifecycle so that negative aspects don’t arise and prevent the system reaching its optimum performance level; and
  • identify and evaluate any additional Human Factors safety aspects that would be covered in the safety case.

You’ll find, particularly in these complex projects, that the interfaces between the sub-projects are where it gets difficult. You might have quite a large project with sub-projects working on particular components. Let’s say one is working on more of the civil/structural elements, and maybe space provisioning and so on, while another is working more on control systems. And the integration between those becomes quite difficult, because you don’t really have that Human Factors integration function working to integrate those two large components. Typically, it works within those focused project groupings, if that’s the way to call them. Does that make sense?

Simon:  Yeah. Yeah, absolutely.

Peter:   I think that’s one of the big challenges I’m seeing at the moment: you have a certain amount of time and money and resource – this would be common for other engineering disciplines – and the integration work often falls by the wayside, I think. And that’s where I think a number of the ongoing Human Factors issues are going to be cropping up in some of these large-scale projects for the next 10 to 20 years, both operationally and perhaps in safety as well. Of course, we want to avoid –

Simon:  –Yes. I mean, what you’re describing sounds very familiar to me as a safety engineer, and I suspect to a lot of engineers of all disciplines who work on large projects. They’re going to recognize that as a familiar problem.

Peter:   Sure. You can think about if you’ve got the civil and space provisioning sort of aspect of a project and another group is doing what goes into, let’s say, a room into a control room or into a maintenance room and so on. It may be that things are constrained in such a way that the design of the racks in the room has to be done in a way that makes the work more difficult for maintainers. And it’s hard to optimize these things because these are complex projects and complex considerations. And a lot of people are involved in them. The nature of engineering work is typically to break things down into little elements, optimize those elements and bring them all together.

Simon:  –Yes.

Peter:   Human Factors tends to – well, you can decompose things in Human Factors as well, but I would argue, and it’s certainly what attracted me to it, that you tend to have to take a more holistic approach to human behaviour and performance in a system.

Simon:  Absolutely.

Peter:   Which is hard.

Simon:   Yes, but rewarding. And on that note, thanks very much, Peter. That’s been terrific. Very helpful. And I look forward to our next chat.

Peter:   For sure. Me too. Okay, thanks!

Simon:  Cheers!

Outro

Simon:  Well, that was our first chat with Peter on the Safety Artisan, and I’m looking forward to many more. So, it just remains for me to say thanks very much for watching and supporting what we’re doing and what we’re trying to achieve. I look forward to seeing you all next time. Okay, goodbye.

End: Introduction to Human Factors


Safety Concepts Part 2

In this 33-minute session, Safety Concepts Part 2, The Safety Artisan equips you with more Safety Concepts. We look at the basic concepts of safety, risk, and hazard in order to understand how to assess and manage them. Exploring these fundamental topics provides the foundations for all other safety topics, but it doesn’t have to be complex. The basics are simple, but they need to be thoroughly understood and practiced consistently to achieve success. This video explains the issues and discusses how to achieve that success.

This is the three-minute demo of the full (33 minute) Safety Concepts, Part 2 video.

Safety Concepts Part 2: Topics

  • Risk & Harm;
  • Accident & Accident Sequence;
  • (Cause), Hazard, Consequence & Mitigation;
  • Requirements / Essence of System Safety;
  • Hazard Identification & Analysis;
  • Risk Reduction / Estimation;
  • Risk Evaluation & Acceptance;
  • Risk Management & Safety Management; and
  • Safety Case & Report.

Safety Concepts Part 2: Transcript


Hi everyone, and welcome to the safety artisan where you will find professional, pragmatic, and impartial advice on safety. I’m Simon, and welcome to the show today, which is recorded on the 23rd of September 2019. Today we’re going to talk about system safety concepts. A couple of days ago I recorded a short presentation (Part 1) on this, which is also on YouTube.  Today we are going to talk about the same concepts but in much more depth.

In the short session, we took some time picking apart the definition of ‘safe’. I’m not going to duplicate that here, so please feel free to go have a look. We said that to demonstrate that something was safe, we had to show that risk had been reduced to a level that is acceptable in whatever jurisdiction we’re working in.

And in this definition, there are a couple of tests that are appropriate to the U.K., but perhaps not elsewhere. We also must meet safety requirements. And we must define the scope and bound the system that we’re talking about, whether it’s a physical system or an intangible system like a computer program. We must define what we’re doing with it, what it’s being used for, and within which operating environment, within which context, it is being used. And if we can do all those things, then we can objectively say – or claim – that the system is safe.

Topics

We’re going to talk about a lot more topics. We’re going to talk about risk, accidents, and the cause-hazard-consequence sequence. We’ll talk about requirements and – spoiler alert – what I consider to be the essence of system safety. And then we’ll get into talking about the process of demonstrating safety: hazard identification and analysis; risk reduction and estimation; risk evaluation and acceptance. And then, pulling it all together: risk management and safety management. And finally, reporting – making an argument that the system is safe, supporting it with evidence, and summarizing all of that in a written report. This is what we all do, albeit in different ways and calling it different things.

Risk

Onto the first topic: risk and harm. Our concept of risk is a combination of the likelihood and severity of harm. Generally, we’re talking about harm to people: death, injury, damage to health. Now, we might also choose to consider damage to property and the environment. That’s all good, but I’m going to concentrate on harm to people because, usually, that’s what we’re required to do by the law. (There are other laws covering the environment and property sometimes, but we’re not going to talk about those.) Just to illustrate this point: risk is a combination of severity and likelihood.

We’ve got a very crude risk table here, with likelihood along the top and severity down the side. And we might see, by looking at the table, that if we have a high likelihood and high severity, well, that’s a high risk. Whereas if we have low likelihood and low severity, we might say that’s a low risk. And in between, a combination of high and low, we might say that’s medium. Now, this is a very crude and simple example, deliberately.
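(The crude table described here can be expressed as a simple lookup; the category labels just mirror this example, not any particular standard’s matrix.)

```python
# A crude 2x2 risk matrix: risk level from (likelihood, severity).
# Real standards use finer-grained categories; this mirrors the example.
RISK_MATRIX = {
    ("high", "high"): "high",
    ("high", "low"):  "medium",
    ("low",  "high"): "medium",
    ("low",  "low"):  "low",
}

def risk_level(likelihood, severity):
    """Look up the risk level for a likelihood/severity combination."""
    return RISK_MATRIX[(likelihood, severity)]
```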

You will see risk matrices like this in loads of different standards, and you may be required to define your own for a specific system. There are lots of variations on this, but they're all basically doing the same thing: illustrating how we determine the level of risk by that combination of severity and likelihood. I think a picture is worth a thousand words. Moving on to the accident: we're talking about (in this standard) an unintended event that causes harm.

Accidents, Sequences and Consequences

Not all jurisdictions consider just accidental events; some consider deliberate ones as well, but we'll leave that out. A good example of that is Work Health and Safety in Australia, but no doubt we'll get to that in another video sometime. And the accident sequence is the progression of events that results in an accident, that leads to harm. Now, we're going to illustrate the accident sequence in a moment, but before we get there, we need to think about causes. Here we've got a hazard: a physical situation or state of a system, often following some initiating event, that may lead to an accident – a thing that may cause harm.

And then, allied with that, we have the idea of consequences: an outcome, or outcomes, resulting from an event. Now, that all sounds a bit woolly, doesn't it? Let's illustrate it; hopefully, this will make it a lot clearer. I've got a sequence here: we have causes that might lead to a hazard, and the hazard might lead to different consequences. That's the accident sequence. Now, in this standard, they didn't explicitly define causes.

Cause, Hazard and Consequence

They're just called events. But mostly we will deal with causes and consequences in system safety, and it's probably just easier to illustrate it that way. Whether or not you choose to explicitly address every cause is often an optional step. But this is the accident sequence that we're looking at. These sorts of funnels are meant to illustrate the fact that there may be many causes for one hazard, and one hazard may lead to many consequences – and some of those consequences may involve no harm at all.

We may not actually have an accident; we may get away with it. We may have a hazard, and no harm may befall a human. If we take all of this together, that's the accident sequence. Now, it's worth reiterating that just because a hazard exists, it does not necessarily lead to harm. But to get to harm, we must have a hazard; a hazard is necessary, but not sufficient, to lead to harmful consequences. OK.

Hazards: an Example

And you can think of a hazard as an accident waiting to happen. You can think of it in lots of different ways. Let's think about an example: the hazard might be that somebody slips while walking. That slip might be caused by many things: it might be a wet surface – let's say it's been raining, and the pavement is slippery – or it might be icy. It might be a spillage of oil on a surface, or you could imagine something slippery like ball bearings on a surface.

So, there’s something that’s caused the surface to become slippery. A person slips – that’s the hazard. Now the person may catch themselves; they may not fall over. They may suffer no injury at all. Or they might fall and suffer a slight injury; and, very occasionally, they might suffer a severe injury. It depends on many different factors. You can imagine if you slipped while going downstairs, you’re much more likely to be injured.

And younger, healthy, fit people are more likely to get over a fall without being injured, whereas if they’re very elderly and frail, a fall can quite often result in a broken bone. If an elderly person breaks a bone in a fall the chances of them dying within the next 12 months are quite high. They’re about one in three.
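The slip example above – many causes leading to one hazard, one hazard leading to many consequences, only some of which involve harm – can be captured in a small data model. This is a hypothetical sketch; the class names and example data are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Consequence:
    description: str
    harmful: bool  # not every consequence involves harm

@dataclass
class Hazard:
    """A state of the system that may lead to an accident."""
    description: str
    causes: list          # many causes may lead to one hazard
    consequences: list    # one hazard may lead to many consequences

# The slip example from the text, as data:
slip = Hazard(
    description="Person slips on walkway",
    causes=["wet surface", "ice", "oil spillage"],
    consequences=[
        Consequence("no injury (near miss)", harmful=False),
        Consequence("slight injury", harmful=True),
        Consequence("severe injury", harmful=True),
    ],
)

# A hazard is necessary, but not sufficient, for harm: at least
# one consequence here is harmless (the near miss).
harmful = [c for c in slip.consequences if c.harmful]
print(len(slip.causes), "causes,", len(harmful), "harmful consequences")
```

The funnel shape falls out of the data: several causes converge on the one hazard, and the one hazard fans out into several consequences of differing severity.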

So, the level of risk is sensitive to a lot of different factors. To get an accurate picture, an accurate estimate of risk, we're going to need to factor in all those things. But before we get to that: we've already said that a hazard need not lead to harm. In this standard, where a hazard has occurred and could have progressed to an accident but didn't, we call this an incident – a near miss.

We got away with it; we were lucky; whatever you want to call it. We've had an incident, but no one has been hurt. Hopefully, that incident is reported, which will help us to prevent an actual accident in future. That's another very useful concept, which reminds us that not all hazards result in harm. Sometimes there will be no accident, no harm, simply because we were lucky, or because someone present took some action to prevent harm to themselves or others.

Mitigation Strategies (Controls)

But we would really like to deliberately design out or avoid hazards if we can. What we need is a mitigation strategy: a measure, or measures, that, when we put them into practice, reduce that risk. Normally, we call these things controls. Now we've illustrated this: we've added to the funnels some mitigation strategies, and they are the dark blue dashed lines.

They are meant to represent barriers that prevent the accident sequence progressing towards harm. They have dashed lines because very few controls are perfect – everything's got holes in it. And we might have several of them. But usually, no control will cover all possible causes, and very few controls will deal with all possible consequences. That's what those barriers are meant to illustrate.

That idea, that picture, will be very useful to us later, when we are thinking about how we're going to estimate and evaluate risk overall, what risk reduction we have achieved, and how we justify that what we've done is good. It's a very powerful illustration. Now, let's move on to safety requirements.

Safety Requirements

Now, I guess it's no great surprise to say that requirements, once met, can contribute directly to the safety of the system. Maybe we've got a safety requirement that says all cars will be fitted with seatbelts; and let's say we're also required to wear a seatbelt. That makes the system safer.

Or the requirement might say we need to provide evidence of the safety of the system. The requirement might refer to a process that we've got to go through, or a certain kind of evidence that we've got to provide. Safety requirements can cover either or both of these.

The Essence of System Safety

Requirements covering the safety of the system, or demonstrating that the system is safe, should give us assurance: adequate confidence, or justified confidence, supported with evidence and gained by following a process. And we'll talk more about process. We meet safety requirements; we get assurance that we've done the right thing. And this really brings us to the essence of what system safety is. We've got all these requirements – everything is a requirement, really – including the requirement to demonstrate risk reduction.

And those requirements may apply to the system itself, the product, or they may apply to the process that generates the evidence, or to the evidence itself. Putting all those things together in an organized and orderly way really is the essence of system safety. This is where we are addressing safety in a systematic, orderly, organized way. (Those words will keep coming back.) That's the essence of system safety, as opposed to the day-to-day task of keeping a workplace safe.

Maybe by mopping up spills and providing handrails so people don't slip over – things like that. We're talking about a more sophisticated level of safety, because we have a more complex, more challenging problem to deal with. That's system safety. We will start on the process now, and we begin with hazard identification and analysis; first, we need to identify and list the hazards and the accidents associated with the system.

We've got a system, physical or not. What could go wrong? We need to think about all the possibilities. And then, having identified some hazards, we need to start doing some analysis. We follow a process that helps us to delve into the detail of those hazards and accidents, and to define and understand the accident sequences that could result. In fact, in doing the analysis we will very often identify some more hazards that we hadn't thought of before. It's not a straight-through process; it tends to be an iterative process.

Risk Reduction

And ultimately what we're trying to do is reduce risk. We want a systematic process – which is what we're describing now – a systematic process of reducing risk. And at some point, we must estimate the risk that we're left with, before and after all these controls, these mitigations, are applied. That's risk estimation. Again, there's that systematic word: we're going to use all the available information to estimate the level of risk that we've got left, recalling that risk is a combination of severity and likelihood.

Now, as we get towards the end of the process, we need to evaluate risk against set criteria. Those criteria vary depending on which country you're operating in, or which industry you're in: what regulations apply, and what good practice is relevant. All those things can be a factor. Now, in this case, this is a U.K. standard, so we've got two tests for evaluating risk. It's a systematic determination using all the available evidence, and it should be an objective evaluation, as far as we can make it.

Risk Evaluation

We should use set criteria to decide whether a risk can be accepted or not, and in the U.K. there are two tests for this. As we've said before, there is ALARP, the 'As Low As Reasonably Practicable' test, which asks: have we put into practice all reasonably practicable controls to reduce risk? (This is a risk reduction target.) And then there's an absolute level of risk to consider as well, because even if we've taken all reasonably practicable measures, the risk remaining might still be so high as to be unacceptable in law.

Now, that test is specific to the U.K., so we don't have to worry too much about it. The point is, there are objective criteria against which we must test or measure ourselves. That evaluation will pop out a decision as to whether further risk reduction is necessary. If the risk level is still too high, we might conclude that there are still reasonably practicable measures that we could take – and then we've got to take them.

We have an objective decision-making process to say: have we done enough to reduce risk? If not, we need to do some more, until we get to the point where we can apply the test again and say: yes, we've done enough. Right, that's rather a long-winded way of explaining it. I apologize, but it is a key issue, and it does trip up a lot of people.
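The two-test evaluation described above can be sketched as a simple decision function. The function name, the tolerability criterion, and the example controls are illustrative assumptions; real ALARP and tolerability judgments are qualitative and context-dependent:

```python
def risk_acceptable(risk_level: str,
                    untaken_practicable_controls: list) -> bool:
    """Illustrative two-test risk evaluation (UK-style).

    Test 1 (ALARP): have all reasonably practicable controls
    been put into practice?
    Test 2 (absolute level): is the remaining risk tolerable?
    Both tests must pass; neither excuses the other.
    """
    alarp = len(untaken_practicable_controls) == 0
    tolerable = risk_level in ("low", "medium")  # assumed criterion
    return alarp and tolerable

# Even with all practicable controls applied, a high residual
# risk still fails the absolute test:
print(risk_acceptable("high", []))              # False
# A tolerable risk with practicable controls still untaken
# fails the ALARP test - there's more to do:
print(risk_acceptable("medium", ["handrail"]))  # False
print(risk_acceptable("medium", []))            # True
```

The point the code makes is the same one the text makes: reducing risk to a tolerable level does not excuse skipping practicable controls, and applying every practicable control does not excuse an intolerable residual risk.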

Risk Acceptance

Now, once we've concluded that we've done enough to reduce risk, and no further risk reduction is necessary, somebody should be in a position to accept that risk. Again, it's a systematic process, by which relevant stakeholders agree that risks may be accepted. In other words, somebody with the right authority has said: yes, we're going to go ahead with the system and put it into practice, implement it. The resulting risks to people are acceptable, provided we apply the controls.

And we accept that responsibility. Those people who are signing off on those risks are exposing themselves and/or other people to risk. Usually, they are employees, but sometimes members of the public as well, or customers. If you're going to put customers in an airliner, you're saying: yes, there is a level of risk to passengers, but the regulator, or whoever, has deemed [the risk] to be acceptable. It's a formal process to get those risks accepted and say: yes, we can proceed. But again, that varies greatly between different countries and different industries, depending on what regulations, laws and practices apply. (We'll talk about different applications in another section.)

Risk Management

Now, putting all this together, we call this risk management. Again, that wonderful systematic word: a systematic application of policies, procedures and practices to these tasks: hazard identification and analysis, risk estimation, risk evaluation, risk reduction and risk acceptance. It's helpful to demonstrate that we've got a process here, where we go through these things in order. Now, this is a simplified picture, because it kind of implies that you just go through the process once.

With a complex system, you go through the process at least once. We may identify further hazards when we get into hazard analysis and risk estimation; in the process of trying to do those things – even as late as applying controls and getting to risk acceptance – we may discover that we need to do additional work. We may try to apply controls and discover that the controls we thought were going to be effective are not effective.

Then our evaluation of the level of risk and its acceptability is wrong, because it was based on the premise that the controls would be effective, and we've discovered that they're not; so we must go back and redo some work. Maybe, as we go through, we even discover hazards that we hadn't anticipated before. This can and does happen. It's not necessarily a straight-through process; we can iterate through it, perhaps several times, while we are moving forward.
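The iterative flow just described – going back when analysis reveals new hazards or controls turn out to be ineffective – can be sketched as a loop. The function names and the bail-out limit here are invented for illustration; on a real project, each callable stands for a whole engineering activity, not a function:

```python
def manage_risk(identify, analyse, estimate, evaluate, max_iterations=10):
    """Illustrative iterative risk-management loop.

    The callables are placeholders for real project activities:
    - identify() -> set of hazards found this pass
    - analyse(hazards) -> further hazards revealed by analysis
    - estimate(hazards) -> residual risk level after controls
    - evaluate(risk) -> True once the risk is acceptable
    """
    hazards = set()
    for _ in range(max_iterations):
        hazards |= identify()
        hazards |= analyse(hazards)   # analysis often finds more hazards
        residual = estimate(hazards)
        if evaluate(residual):
            return residual           # risk accepted; proceed
        # otherwise: rework controls and go round again
    raise RuntimeError("risk still not acceptable after rework")
```

The point is only the shape of the flow: the process loops back through identification, analysis and estimation until the evaluation passes, rather than running straight through once.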

Safety Management

OK, safety management. We've gone to a higher level, really, than risk, because we're thinking about requirements as well as risk. We're going to apply organization and management principles to achieve safety with high confidence. For the first time, we've introduced this idea of confidence in what we're doing. Well, I say the first time – this is assurance, isn't it? Assurance: having justified confidence, or appropriate confidence, because we've got the evidence. And that might be product evidence: we might have tested the product to show that it's safe.

We might have analysed it. We might say: well, we've shown that we followed the process, and that gives us confidence that our evidence is good, that we've done all the right things and identified all the risks. That's safety management. We need to put all that in a safety management system: a defined organizational structure, with defined processes, procedures and methods, that gives us direction and control of all the activities that we need to combine to effectively meet safety requirements and safety policy.

And our safety tests, whatever they might be. More and more now, we're thinking about top-level organization and planning to achieve the outcomes we need, with a complex system, a complex operating environment and a complex application.

Safety Planning

Now, I'll just mention planning. We need a safety management plan that defines the strategy: how we're going to get there, how we're going to address safety. We need to document the safety management system for a specific project. Planning is very important for effective safety; safety is very vulnerable to poor planning. If a project is badly planned, or not planned at all, it becomes very difficult to do safety effectively, because we are dependent on following a rigorous process to give us confidence that our results are correct. If you've got a project that is a bit haphazard, that's not going to help you achieve the objectives.

Planning is important. Now, the part of that safety plan that deals with timescales, milestones and other date-related information we might refer to as a safety programme. This being a UK definition: British English has two spellings of 'program', and the double-m-e version, 'programme', applies to that time-based, or milestone-based, progression.

Whereas in the US, and in Australia for example, we don't have those two words; we just have the one word, 'program', which covers everything: computer programs, and a programme of work that may or may not be determined by timescales or milestones. But the point is that certain things may have to happen at certain points in time, or before certain milestones. We may need to demonstrate safety before we are allowed to proceed to tests and trials, or before we are allowed to put our system into service.

Demonstrating Safety

We've got to demonstrate that safety has been achieved before we expose people to risk. That's very simple. Now, finally, we're almost at the end. We need to provide a demonstration – maybe to a regulator, maybe to customers – that we have achieved safety. This standard uses the concept of a safety case. The safety case is, basically – imagine a portfolio full of evidence. We've got a structured argument to put it all together, and a body of evidence that supports the argument.

It provides a compelling, comprehensible (or understandable) and valid case that a system is safe for a given application or use, in a given operating environment. Really, that definition of what a safety case is harks back to that meaning of 'safe'. We've got something that really hits the nail on the head. And we might put all of that together and summarise it in a safety case report, which summarises the arguments and evidence, and documents progress against the safety programme.

Remember, I said our planning was important. We started off saying that we need to do this, that and the other in order to achieve safety. Hopefully, in the end, in the safety case report, we'll be able to state that we've done exactly that: we did do all those things; we did follow the process rigorously; we've got good results; we've got a robust safety argument, with evidence to support it. At the end, it's all written up in a report.

Documenting Safety

Now, that isn't always going to be called a safety case report; it might be called a safety assessment report, or a design justification report – there are lots of names for these things. But they all tend to do the same kind of thing: they pull together the argument as to why the system is safe, and the evidence to support the argument; and they document progress against a plan, or against some set of process requirements from a standard, a regulator, or just good practice in an industry, to say: yes, we've done what we were expected to do.

The result is usually what justifies [the system] getting past that milestone, where the system goes into service and can be used. People can be exposed to those risks, but safely and under control.

Everyone’s a winner, as they say!

Copyright – Creative Commons Licence

Okay. I've used a lot of information from a UK government website, and I've done that in accordance with the terms of its Creative Commons licence; you can see more about that here. We have complied with that, as we are required to, and we are required to tell you that the information we've supplied is used under the terms of this licence.

Safety Concepts Part 2: More Resources

And for more resources and more lessons on system safety, and other safety topics, I invite you to visit the safetyartisan.com website. Thanks very much for watching. I hope you found that useful.

We've covered a lot of information there, but hopefully in a structured way. We've repeated the key concepts, and you can see that, in that standard, the key concepts are consistently defined and reinforce each other, in order to get that systematic, disciplined approach to safety that we need.

Anyway, that’s enough from me. I hope you enjoyed watching and found that useful. I look forward to talking to you again soon. Please send me some feedback about what you thought about this video and also what you would like to see covered in the future.

Thank you for visiting The Safety Artisan. I look forward to talking to you again soon. Goodbye.

Safety Concepts Part 1 defines the meaning of ‘Safe’, and it is free. Return to the Start Here Page.

Categories
Start Here System Safety

System Safety Principles

In this 45-minute video, I discuss System Safety Principles, as set out by the US Federal Aviation Authority in their System Safety Handbook. Although this was published in 2000, the principles still hold good (mostly) and are worth discussing. I comment on those topics where modern practice has moved on, and those jurisdictions where the US approach does not sit well.

This is the ten-minute preview of the full, 45-minute video.

System Safety Principles: Topics

  • Foundational statement
  • Planning
  • Management Authority
  • Safety Precedence
  • Safety Requirements
  • System Analyses Assumptions & Criteria
  • Emphasis & Results
  • MA Responsibilities
  • Software hazard analysis
  • An Effective System Safety Program

System Safety Principles: Transcript

Click here for the Transcript

Hello and welcome to The Safety Artisan, where you will find professional, pragmatic and impartial educational products. I'm Simon, and it's the 3rd of November 2019. Tonight, I'm going to give a short introduction to System Safety Principles.

Introduction

On to system safety principles. In the full video we look at all the principles from the U.S. Federal Aviation Authority's System Safety Handbook, but in this little four- or five-minute video – whatever it turns out to be – we'll take a quick look, just to let you know what it's about.

Topics for this Session

These are the subjects in the full session: a foundational statement; planning; the management authority (which is the body responsible for bringing into existence – in this case – some kind of aircraft or air traffic control system, something that the FAA would regulate in the US); safety precedence – in other words, what is the most effective safety control to use; safety requirements; system analyses – highlighted because that's the sample I'm going to talk about tonight; assumptions and safety criteria; emphasis and results – which is really about how much work you put in, where, and why; management authority responsibilities; a little aside on a specialist area, software hazard analysis; and finally, what you need for an effective System Safety Program.

Now, it's worth mentioning that this is not an uncritical look at the FAA handbook. It is 19 years old now; the principles are still good, but some of it is a bit long in the tooth. There are some areas, particularly on software, where things have moved on. And there are some areas where the FAA approach to system safety is very much predicated on an American way of doing these things.

Systems Analysis

So, without further ado, let’s talk about system analysis. There are two points that the Handbook makes. First of all, that these analyses are basic tools for systematically developing design specifications. Let’s unpack that statement. So, the analyses are tools- they’re just tools. You’ve still got to manage safety. You’ve still got to estimate risk and make decisions- that’s absolutely key. The system analyses are tools to help you do that. They won’t make decisions for you. They won’t exercise authority for you or manage things for you. They’re just tools.

Secondly, the whole point is to apply them systematically. So, coverage is important here- making sure that we’ve covered the entire system. And also doing things in a thorough and orderly fashion. That’s the systematic bit about it. And then finally, it’s about developing design specifications. Now, this is where the American emphasis comes in. But before we talk about that, it’s fundamental to note that really we need to work out what our safety requirements are. What are we trying to achieve here with safety? And why? And those are really important concepts because if you don’t know what you’re trying to achieve then it will be very difficult to get there and to demonstrate that you’ve got there- which is kind of the point of safety. And putting effort into getting the requirements right is very important because without doing that first step all your other work could be invalid. And in my experience of 20 plus years in the business, if you don’t have a really precise handle on what you’re trying to achieve then you’re going to waste a lot of time and money, probably.

So, on to the second bullet point. Now, the handbook says that the ultimate measure of safety is not the scope of analysis but the satisfaction of requirements. So, the first part – very good. We're not doing analysis for the sake of it. The measure of safety is not that we've analyzed something to death, or that we've expended vast amounts of dollars on doing this work, but that we've worked out the requirements and the analysis has helped us to meet them. That is the key point.

This is where it can go slightly pear-shaped in that this emphasis on requirements (almost to the exclusion of anything else) is a very U.S.-centric way of doing things. So, very much in the US, the emphasis is you meet the spec, you certify that you’ve met spec and therefore we’re safe. But of course what if the spec is wrong? Or what if it’s just plain inappropriate for a new use of an existing system or whatever it might be?

In other jurisdictions, notably the U.K. (and as you can tell from my accent, that's where I'm from; I've got a lot of experience doing safety work in the U.K., but also in Australia, where I now live and work), it's not just about meeting requirements. Well, it is, but let me explain. In the UK and Australia, English law works on the idea of intent. So, we aim to make something safe: the question is not merely whether it has met its requirements – that doesn't matter so much – but whether the risk has actually been reduced to an acceptable level. There are tests for deciding what is acceptable. Have you complied with the law? The law outside the US can take a very different approach to "it's all about the specification".

Of course, those legal requirements, and that requirement to reduce risk to an acceptable level, are, in themselves, requirements. But in an Australian or British legal jurisdiction, you need to think about those legal requirements as well; they must be part of your requirements set. Just having a specification for a technical piece of kit is not enough if it ignores the requirements of the law, which include not only design requirements but also that the thing is actually safe in service, and can be safely introduced, used, disposed of, and so on. If you don't take those things into account, you may not meet all your obligations under that system of law. So, there's an important point about understanding and using American standards, and an American approach to system safety, outside their assumed context. That's true of all standards and all approaches, but it's a point I bring out in the main video quite forcefully, because it's very important to understand.

Copyright Statement

So, that's the one subject I'm going to talk about in this short video. I'd just like to mention that all quotations are from the FAA System Safety Handbook, which is copyright-free, but the content of this video presentation, including the added value from my 20-plus years of experience, is copyright of The Safety Artisan.

For More…

And wherever you’re seeing this video, be it on social media or whatever, you can see the full version of the video and all other videos at The Safety Artisan.

End

That's the end of the show. It just remains for me to say thanks very much for giving me your time, and I look forward to talking to you again soon. Bye-bye.

Back to the Start Here Page.

Categories
Start Here System Safety

Safety Concepts Part 1

In ‘Safety Concepts Part 1’, the Safety Artisan looks at the meaning of the term “safe”. We look at an objective definition of safe – objective because it can be demonstrated to have been met. This fundamental topic provides the foundation for all other safety topics, and it isn’t complex. The basics are simple, but they need to be thoroughly understood and practiced consistently to achieve success.

System Safety Concepts – a Short Introduction

Safety Concepts Part 1: Topics

  • A practical (useful) definition of ‘safe’:
    • What is risk?
    • What is risk reduction?
    • What are safety requirements?
  • Scope:
    • What is the system?
    • What is the application (function)?
    • What is the (operating) environment?

Safety Concepts Part 1: Transcript

Click Here for the Transcript

Hi everyone and welcome to the Safety Artisan, where you will find professional, pragmatic and impartial advice. Whether you want to know how safety is done or how to do it, I hope you’ll find today’s session helpful.

It’s the 21st of September 2019 as I record this. Welcome to the show. So, let’s get started. We’re going to talk today about System Safety concepts. What does it all mean?  We need to ask this question because it’s not obvious, as we will see.

If we look at a dictionary definition of the word ‘safe’, it’s an adjective: to be protected from or not exposed to danger or risk. Not likely to be harmed or lost. There are synonyms – protect, shield, shelter, guard, and keep out of harm’s way. They’re all good words, and I think we all know what we’re talking about. However, as a definition, it’s too imprecise. We can’t objectively say whether we have achieved safety or not.

A Practical Definition of ‘Safe’

What we need is a better definition, a more practical definition. I’ve taken something from an old UK Defence Standard. Forget about which standard, that’s not important. It’s just that we’re using a consistent set of definitions to work through basic safety concepts. And it’s important to do that because different standards, come from different legal systems and they have different philosophies. So, if you start mixing standards and different concepts together, that doesn’t always work.

OK so whatever you do, be consistent. That’s the key point. We’re going to use this set of definitions from the UK Defence Standard because they are consistent.

In this standard, ‘safe’ means: “Risk has been demonstrated to have been reduced to a level that is ALARP, and broadly acceptable or tolerable. And relevant prescriptive safety requirements have been met. For a system, in a given application, in a given Operating Environment.” OK, so let’s unpack that.

System Safety – Risk

So, we start with risk. We need to manage risk. We need to show that risk has been reduced to an acceptable level. As required perhaps by law, or regulation or a standard. Or just good practice in a particular industry. Whatever it is, we need to show that the risk of harm to people has been reduced. Not just any old reduction, we need to show that it’s been reduced to a particular level. Now in this standard, there are two tests for that.

And they're both objective tests. The first one is 'as low as reasonably practicable': basically, it's asking whether all reasonably practicable risk reduction measures have been taken. So that's one test. And the second test is a bit simpler: it's basically saying reduce the absolute level of risk to something that is tolerable or acceptable. Now, don't worry too much about precisely what these things mean. The purpose for today is to note that we've got an objective test to say that we've done enough.

System Safety – Requirements

So that’s dealt with risk. Let’s move on to safety requirements. If a requirement is relevant, then we need to apply it. If it’s prescriptive, if it says you must do this, or you must do that. Then we need to meet it. There are two separate parts to this ‘Safe’ thing: we’ve got to meet requirements; and, we’ve got to manage risk. We can’t use one as an excuse for not doing the other.

So just because we reduce risk until it’s tolerable or acceptable doesn’t mean that we can ignore safety requirements. Or vice versa. So those are the two key things that we’ve got to do. But that’s not actually quite enough to get us there. Because we’ve got to define what we’re doing, with what and in what context. Well, we’re reducing the risk of a system. And the system might be a physical thing.

Defining the Scope: The System

It might be a vehicle, an aeroplane or a ship or a submarine, it might be a car or a truck. Or it might be something a bit more intangible. It might be a computer program that we’re using to make decisions that affect the safety of human beings, maybe a medical diagnosis system. Or we’re processing some scripts or prescriptions for medicine and we’ve got to get it right. We could poison somebody. So, whether it’s a tangible or an intangible system.

We need to define it. And that's not as easy as it sounds, because if we're applying system safety, we're doing it because we have a complex system. It's not a toaster. It's something a bit more challenging. Defining the system carefully and precisely is really important and helpful. So, we define what our system is, our thing or our service: the system. Then, what are we doing with it? What are we applying it to?

Defining the Scope: The Application

What are we using it for? Now, just to illustrate that no standard is perfect: whoever wrote that defence standard didn't bother to define the application. Which is kind of a major stuff-up, to be honest, because that's really important. By the way, I checked through the standard that I was referring to, and it does not explain what it means by the application; otherwise, I would have used that definition by preference.

So, let's go back to an ordinary dictionary definition just to get an idea of what it means. If we go to the dictionary, we see that an application is the act of putting something into operation. OK, so, we're putting something to use. We're implementing it, employing it or deploying it; maybe we're utilizing it, applying it, executing it or enacting it. We're carrying it out, putting it into operation or putting it into practice. All useful words that help us to understand.

I think we know what we're talking about. So, we've got a thing or a service. Well, what are we using it for? Quite obviously, a car is probably going to be quite safe on the road. Put it in water and it probably isn't safe at all. So, it's important to use things for their proper application, the use for which they were designed. And then, kind of harking back to what I just said, the correct operating environment.

Defining the Scope: The Operating Environment

For this system, and the application to which we will put it. So, we've got a thing that we want to use for something. What's the operating environment in which it will be safe? What's it qualified or certified for? What's the performance envelope that it's been designed for? Typically, things work pretty well within the operating environment, within the envelope, for which they were designed. Take them outside of that envelope and they don't perform so well.

Maybe they don't perform at all. Take an aeroplane too high and the air is too thin, and it becomes uncontrollable. Take it too low and it smashes into the ground. Neither outcome is particularly good for the occupants of the aeroplane, or for whoever happens to be underneath it when it hits the ground. So, all three of those things have to be defined: what is the system? What are we doing with it? And where are we doing it? Otherwise, we can't really say that risk has been dealt with, or that safety requirements have been met.
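Those three scoping questions can be captured as a simple record. Again, this is a hypothetical sketch of my own, just to show that a safety claim is only meaningful relative to a defined scope; the class and field names are illustrative, not from any standard:

```python
# Hypothetical sketch: a claim that something is 'safe' only makes sense
# relative to a defined scope. The three fields mirror the three
# questions in the text; the names are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafetyScope:
    system: str                  # What is the system?
    application: str             # What are we doing with it?
    operating_environment: str   # Where are we doing it?

# A car used on the road is one scope...
car_on_road = SafetyScope(
    system="family car",
    application="road transport of passengers",
    operating_environment="public roads, normal weather",
)

# ...the same car in water is a different scope, and so needs a
# different (and probably unachievable) safety argument.
car_in_water = SafetyScope(
    system="family car",
    application="road transport of passengers",
    operating_environment="submerged in water",
)

print(car_on_road == car_in_water)  # False: not the same safety claim
```

Change any one of the three fields and you have a different safety claim, which is exactly the point the transcript is making.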

System Safety: Why Bother?

So, we've spent several slides just talking about what 'safe' means, which might seem a bit over the top. But I promise you it is not, because having a solid understanding of what we're trying to do is important in safety, precisely because safety is intangible. We need to understand what it is we're aiming for. As some Greek bloke said, thousands of years ago: "If you don't know to which port you are bound, then no wind is favourable."

It’s almost impossible to have a satisfactory Safety Program if you don’t know what you’re trying to achieve. Whereas, if you do have a precise understanding of what you’re trying to achieve, you’ve got a reasonably good chance of success. And that’s what it’s all about.

Copyright Statement

Well, I've quoted you some information from a UK government website, and I've done so in accordance with the terms of its Creative Commons licence. More information about the terms of that licence can be found at this page.

End: Safety Concepts Part 1

If you want more, if you want to unpack all the Major Definitions, all the system safety concepts that we’re talking about, then there’s the second part of this video, which you can see here.

I hope you enjoy it. Well, that’s it for the short video, for now. Please go and have a look at the longer video to get the full picture. OK, everyone, it’s been a pleasure talking to you and I hope you found that useful. I’ll see you again soon. Goodbye.
