Categories
Blog Safety Management

FAQ on Risk Management

In this FAQ on Risk Management, I will point you to some lessons that will give you answers to basic questions.

Lessons on this Topic

Welcome to Risk Management 101, where we’re going to go through these basic concepts of risk management. We’re going to break it down into the constituent parts and then we’re going to build it up again and show you how it’s done.

So what is this risk analysis stuff all about? What is ‘risk’? How do you define or describe it? How do you measure it? In Risk Basics I explain the basic terms.

Risk Analysis Programs – Design a program for any system in any application. You’ll be able to:

  • Describe fundamental risk concepts;
  • Define what a risk analysis program is;
  • and much more…

If you don’t find what you want in this FAQ on Risk Management, there are plenty more lessons under Start Here and System Safety Analysis topics. Or just enter ‘risk’ into the search function at the bottom of any page.
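To give a flavour of the Risk Basics lesson: risk is commonly described as a combination of the likelihood of a harmful event and the severity of its outcome. Here is a minimal sketch in Python; the 5×5 scales and the low/medium/high thresholds below are illustrative examples only, not taken from any particular standard.

```python
# Illustrative risk matrix: risk as a combination of likelihood and severity.
# The scales and thresholds are example values, not from any standard.

LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "frequent"]
SEVERITY = ["negligible", "minor", "moderate", "major", "catastrophic"]

def risk_rating(likelihood: str, severity: str) -> str:
    """Classify a risk by combining likelihood and severity scores (1-5 each)."""
    score = (LIKELIHOOD.index(likelihood) + 1) * (SEVERITY.index(severity) + 1)
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_rating("rare", "minor"))             # low
print(risk_rating("possible", "major"))         # medium
print(risk_rating("frequent", "catastrophic"))  # high
```

Real risk matrices differ between industries and standards; the point is only that 'risk' is not a single number you measure directly, but a judgment built from (at least) these two dimensions.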


Categories
Behind the Scenes

Q&A: Reflections on a Career in Safety

Now we move on to Q&A: ‘Reflections on a Career in Safety’.


How do you Keep People Engaged with Safety?

Q.           I was thinking of an idea as I was walking here. You mentioned in your slide about going with the flow that sometimes people stop listening to you. I’ve seen a lot of people come up with safety systems where there’s a lot of forms and paperwork to fill out, and a lot of the people doing it just go: it’s just paperwork. It doesn’t do anything for safety. It’s somebody else covering their butt.

Whereas, when I look at them, what they are is almost a prompt to get people to think about the things that can bite them, to keep that idea of what’s in front of them in their heads, rather than letting it degenerate into paperwork for paperwork’s sake. How do you keep them engaged in using that as a tool, rather than as a liability reduction?

A.           Yeah, I think, first of all, there’s got to be a bit of education. They’ve got to understand that they’re dealing with things that are potentially dangerous. I mean, that’s required anyway: you’ve got to warn users and give them the information that they need. But I think mostly it’s about how you engage with people. If you sell it to them, show them there’s a benefit to doing this, and talk in a language that they understand, you’re much more likely to get listened to.

I’ve been to lots of places where people have had awful procedures that don’t help them get the job done; they’re slow and clunky, and they often get ignored. So the trick is to try and make the procedures as helpful to getting the job done as possible. And of course, if you can build in safety so people don’t have to follow so many procedures, that’s even better. If they physically can’t do something dangerous, then that’s great.

That’s much more effective than procedures anyway. But it is all about speaking the user’s language. So, [for example] I learned that with pilots. Pilots have got a particular way of thinking, and you can give them a rule that says ‘don’t do this’, but it might not actually make any sense in that context. So you’ve got to understand what their context is. They can only follow a rule if it’s based on information that’s actually available to them.

So you can say: don’t go below 10,000 feet while doing this, or don’t exceed this speed, otherwise the wings might fall off. That, they understand. If you gave them a load of technical jargon, they probably wouldn’t pay much attention.

That said, you do sometimes have to tell people the bleeding obvious, because I remember a British pilot took off in a plane where the fuel warnings were showing on the wing tanks, but the pilot still took the plane. He got in the air, no fuel was coming out of the wing tanks, and he had to land the plane pretty quickly before it ran out of fuel. And I was going to bring in some advice to our pilots to say: don’t do that. If you see the yellow stripes on the wings on the display, that’s a bad sign. And somebody said to me, oh, no British pilot would be stupid enough to do that. And like a fool, I believed him.

So they did do that. And now we have the rule that says ‘don’t do that’, because it was needed. So there’s always a fine balance, a bit of give and take. Thanks for the question. Anyone else?

Which Project was Most Influential on Your Life?

Q.           If you can share, what’s one of the projects you worked on that was probably the most influential in your life, or that was definitely helpful in getting you where you are now?

A.           That’s a really good question. Well, I suppose the big one in my life was Eurofighter because I spent 13 years, on and off, on Eurofighter and I got to work with some fantastic people; in theory, I was their manager. But in reality, they knew 100 times as much about the subject as I did and I learned a lot from them.

So, yeah, I would say because of that, the sheer number of people. But there were lots of jobs where I got a lot out of it professionally or personally … But yeah, I think it’s the people, wherever you are.

I’ve seen a lot of teams who’ve got terrible workplace conditions, working in an old, dilapidated building. They haven’t got enough spares; they haven’t got enough tools or anything. Everything is against them. But if they’re a good bunch of people, they’ll still achieve great things and enjoy doing it.

How do you Make a Safety System Responsive?

Q.           OK, so you’re talking about these very complicated systems where you permitted people to do work. Because the systems are so difficult, you’ve really planned how work has to happen. But people can shoot holes in the things that you’re working on; theoretically, most operations at the moment involve small arms and the like, but if the 10,000 tanks come over, then you’ve potentially got a lot more holes all of a sudden.

How do you go from that very regimented system and work out how to make it also really, really fast and responsive to something that keeps throwing up problems at a much higher rate than, I’d imagine, you can fill out the forms giving permission for the work, as is the usual practice?

A.           So you’re using the same system over and over and over again. And people will spend years using the same system, maybe on the same equipment or the same plane or whatever it is.

So people are well-practiced. Another technique is that if people are over-trained and they’ve got lots of experience, then they can often cope in adverse circumstances. Sometimes you just have to cut corners in order to get a job done, and it’s having the experience and the knowledge to do that safely, and still get the result you need, that’s the judgment side. That’s the stuff that you can’t write down. But mostly it’s through practice.

So, we would follow a very regimented process. But once you’ve done it enough times, it became second nature.  It’s like training an athlete. Once you’ve got the regular way of doing things down pat, it then becomes a lot easier to spot when you’ve got to do something a bit different and cope with it.

Q&A: How do you Determine Safety Requirements? How do you Detect Safety Issues in Software?

Q.           So I’ll try and combine these, because time’s getting on and I’ve got a lot of questions. You’re talking about safety in software, and safety being an emergent phenomenon, and you’re not necessarily going to know that something you do in software is going to cause an issue. The Typhoon is a very software-controlled aircraft, so in a lot of ways what the computer says, rather than the pilot, is what’s going to happen. You also talked about putting safety into requirements.

You could have a direct safety requirement, but there could be other requirements that impact safety without it being explicit. How do you detect that in a set of system or user requirements? And how do you detect safety issues in software systems that look like they’re doing what they’re supposed to do?

A.           Yeah, let’s do the requirements bit first. Sometimes you get a bunch of requirements and you’ve just got to go through them and look for safety implications. Sometimes it’s really obvious: the customer says, I want this safety system installed in my ship, or the ship has got to be built in accordance with certain rules (class rules, or whatever they might be), and you go, OK, a lot of that will be safety-related.

And sometimes you’ve got to do some work. You’ve got to decompose the requirements, look at how you’re going to solve the problem, and go: OK, the requirements are pushing us to have this high-energy system in my ship, and there are safety issues with managing that and making sure it doesn’t get out of control. So sometimes it only emerges after you’ve done further work, after you’ve decomposed your initial requirements.

As for the people doing the requirements: you might have systems engineers on the client side and on the provider side. If they’re doing their job well, processing the requirements, then these things will tend to emerge quite naturally, if you’ve got good systems engineering. So that’s that one.

The software one: it all depends on how safe, or how dependable, you want the software to be. The Eurofighter had a software-controlled flight control computer, and the aircraft in certain aspects was unstable, so the pilot could not fly it without the computer. That’s as tough as it gets in terms of software safety: the computer cannot fail. To achieve that level of safety, the state of the art at the time was going through the source code in forensic detail and nailing down the compiler so that it was only allowed to do very basic things.

Then you produce the object code, go through the object code in forensic detail, and then test it to death. So there were lots and lots of processes applied, and there were still errors in the software, because there always will be; there are so many of them. But you can at least say that none of these errors will result in an unsafe outcome, provided, of course, that you’ve got a sufficiently detailed specification to say this is safe and this is not.

So, if you’ve got to go to that level of detail, you can forensically go through things. And then, if you’ve heard of Safety Integrity Levels (SILs), or safety integrity requirements: for the different SILs you can have a cookbook approach where you use different techniques. Usually, the toughest SIL demands the state of the art at the time that the standard was created. That’s very crudely how you do it, and hopefully, you’ve got some competent people as well.
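The ‘cookbook’ approach mentioned above can be pictured as a lookup from a SIL to a menu of required techniques, with more (and more rigorous) techniques at higher levels. This is a hypothetical sketch; the technique lists below are invented for illustration and are not taken from any specific standard.

```python
# Hypothetical SIL 'cookbook': which verification techniques a project might
# apply at each Safety Integrity Level. The lists are illustrative only.

TECHNIQUES_BY_SIL = {
    1: ["code review", "functional testing"],
    2: ["code review", "functional testing", "boundary-value testing"],
    3: ["code review", "functional testing", "boundary-value testing",
        "static analysis", "structural coverage testing"],
    4: ["code review", "functional testing", "boundary-value testing",
        "static analysis", "structural coverage testing",
        "forensic source- and object-code verification"],
}

def required_techniques(sil: int) -> list[str]:
    """Return the (illustrative) techniques demanded at a given SIL."""
    if sil not in TECHNIQUES_BY_SIL:
        raise ValueError(f"unknown SIL: {sil}")
    return TECHNIQUES_BY_SIL[sil]

# The higher the SIL, the longer and tougher the menu of techniques.
assert len(required_techniques(4)) > len(required_techniques(1))
```

In real standards, each SIL prescribes (or recommends against) particular techniques in detailed tables; the sketch only captures the shape of the idea.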

Host: Thank you. Thank you so much for sharing your time with us and explaining your journey through safety. Something that I think was interesting is what you raised here.

How do you Deal with People Using Stuff in Ways it Wasn’t Designed for?

Q.           You have to understand people’s motivation, the context of people’s motivations for using the equipment. And people might use it in ways that you don’t even dream of. Right, you might have designed something to do one thing, and then people stand on it to reach something else, that kind of thing, isn’t it? I think when you move from being at university into industry and see how the equipment is actually used, it can blow your mind sometimes.

A.           Yeah. Even people who had worked in the Ministry of Defence [for years]: my boss was horrified at the idea that the Air Force would fly a plane that wasn’t totally serviceable. And to me, that was completely routine. None of them worked totally as intended; there were some features that we just disabled all the time.

Host.     So, yes, that is also something that blows your mind.  Oh, thank you very much, Simon. Thank you and thank you, kind audience. Thanks for your participation.


This was part of a lecture to the University of Adelaide SEIP Course. You can find the other sessions on this site.

So that was ‘Reflections on a Career in Safety: Q&A’. Did you find it useful?

Categories
Behind the Scenes

Reflections on a Career in Safety, Part 5

In ‘Reflections on a Career in Safety, Part 5’, I finally get around to reflecting on personal lessons learned from my own career.

Reflecting on a Career in Safety

Very briefly, I just wanted to pick out three things.

Learning and Practice

First, at university in my first degree and in my master’s degree and in studies I’ve done since then (because you never stop learning) you pick up a theoretical framework, which is fantastic.  You learn to understand things from an abstract point of view or a theoretical point of view.

But there’s also practical experience, and the two complement each other. You can [start] a job where you’re usually doing the same thing over and over again, so you become very competent in that narrow area. But if you don’t have the theoretical framework to put it in, you’ve got all of these jewels of experience, but you can’t understand where they fit into the big picture.

Wilhelmshaven, Picture by S. Di Nucci

And so that’s what your course here does. Whatever courses you do in the future, whatever learning you do in the future, the two complement each other, and actually they work together. Whether I turn up and I understand something from a theoretical point of view, or I’ve actually done it and learned the hard way (usually doing it the hard way is painful), the two are complementary and they’re [both] very useful to help you in your career.

Opportunism and Principles

Second, you’ve heard me say a couple of times I got into software by accident. I got into safety by accident. And it’s all true. An opportunity comes up and you’ve got to grab it either because you think, well, maybe this opportunity won’t come again or you’re trying to get out of a job that you don’t like or avoid doing something you don’t want to do, whatever it might be.

If you have an opportunity, I would say grab it, go for it, be positive and say yes to as many things as you can. And, if I dare to give you some career advice, it would be that.

Photo by Aziz Acharki on Unsplash

But also, in safety, we’ve got to stick to our principles. And sometimes as a safety engineer or an engineer who does safety, you’re going to have to stick to something that costs you, whether it be a promotion or, whether people no longer listen to you because you said, “no, we can’t do that” when it’s something that they really want to do.

You have to understand the difference between things that matter and things that don’t. So if you end up in safety, if you’re working with the safety of people, [you must] learn the things that cannot be negotiated. There are certain requirements in the law and regulations, but they’re often not as onerous as people think; they’re often a lot simpler than people think. So understand: what has to be done, what is optional, and what is merely beneficial? Then you can make a sound judgment.

Simplicity

The final point. Einstein once famously said that if you can’t explain something in simple terms, then you don’t really understand it. And what you and I will all be doing for years to come is dealing with complexity: big projects, politics, technical challenges, not enough time to do something, not enough budget to do it. So, lots of challenges.

I think it’s always a struggle to reduce [a problem] to something simple that you can understand and think: right, this is the essential point that we need to keep hold of. Everything else is kind of fluff and distraction.

So I would say my career in safety has been a constant effort to simplify and to understand the simple things that are important. And that’s what we need to stick to. And again, all of you, whether you do safety or not, you’re going to be dealing with complex systems. Otherwise, we’re not needed as systems engineers.

‘Decomposed’ F1 Racing Car, Brooklands. Photo Simon Di Nucci.

Q&A (Part 6) will follow next week!

New to System Safety? Then start here. There’s more about The Safety Artisan here. Subscribe for free regular emails here.

Categories
System Safety

Reflections on a Career in Safety, Part 4

In ‘Reflections on a Career in Safety, Part 4’, I want to talk about Consultancy, which is mostly what I’ve been doing for the last 20 years!

Consultancy

As I said near the beginning, I thought that in the software supportability team, we all wore the same uniform as our customers. We didn’t cost them anything. We were free. We could turn up and do a job. You would think that would be an easy sell, wouldn’t you?

Not a bit of it.  People want there to be an exchange of tokens. If we’re talking about psychology, if something doesn’t cost them anything, they think, well, it can’t be worth anything. So [how much] we pay for something really does affect our perception of whether it’s any good.

Photo by Cytonn Photography on Unsplash

So I had to go and learn a lot of sales and marketing type stuff in order to sell the benefits of bringing us in, because, of course, there was always an overhead of bringing new people into a program, particularly if they were going to start asking awkward questions, like how are we going to support this in service? How are we going to fix this? How is this going to work?

So I had to learn a whole new language and a whole new way of doing business and going out to customers and saying, we can help you, we can help you get a better result. Let’s do this. So that was something new to learn. We certainly didn’t talk about that at university.  Maybe you do more business focussed stuff these days. You can go and do a module, I don’t know, in management or whatever; very, very useful stuff, actually. It’s always good to be able to articulate the benefits of doing something because you’ve got to convince people to pay for it and make room for it.

Doing Too Little, or Too Much

And in safety, I’ve got two [kinds of] jobs.

First of all, I suppose it’s the obvious one. Sometimes you go and see a client, they’re not aware of what the law says they’re supposed to do or they’re not aware that there’s a standard or a regulation that says they’ve got to do something – so they’re not doing it. Maybe I go along and say, ah, look, you’ve got to do this. It’s the law. This is what we need to do.

Photo by Quino Al on Unsplash

Then, there’s a negotiation because the customer says, oh, you consultants, you’re just making up work so you can make more money. So you’ve got to be able to show people that there’s a benefit, even if it’s only not going to jail. There’s got to be a benefit. So you help the clients to do more in order to achieve success.

You Need to Do Less!

But actually, I spend just as much time advising clients to do less, because I see lots of clients doing things that appear good and sensible. Yes, they’re done with all the right motivation. But you look at what they’re doing and you say: well, you’re spending all this money and time, but it’s not actually making a difference to the safety of the product or the process or whatever it is.

You’re chucking money away, really, for very little or no effect. Sometimes people are doing work that actually obscures safety. They dive into all this detail, and you go: well, actually, you’ve created all this data that’s got to be managed, and that’s actually distracting you from this thing over here, which is the thing that’s really going to hurt people.

So, [often] I spend my time helping people to focus on what’s important and dump the comfort blanket, OK, because lots of times people are doing stuff because they’ve always done it that way, or it feels comforting to do something. And it’s really quite threatening to them to say, well, actually, you think you’re doing yourself a favor here, but it doesn’t actually work. And that’s quite a tough sell as well, getting people to do less.

Photo by Prateek Katyal on Unsplash

However, sometimes less is definitely more in terms of getting results.

Part 5 will follow next week!


Categories
System Safety

Reflections on a Career in Safety, Part 3

In ‘Reflections on a Career in Safety, Part 3’ I continue talking about different kinds of Safety, moving onto…

Projects and Products

Then moving on to the project side, where teams of people were making sure a new aeroplane, a new radio, a new whatever it might be, was going to work in service; people were going to be able to use it easily, support it, and get it replaced or repaired if they had to. So it was a much more technical job: lots of software, lots of process, and even more people.

Moving to the software team was a big shock to me. It was accidental. It wasn’t a career move that I had chosen, but I enjoyed it when I got there.  For everything else in the Air Force, there was a rule. There was a process for doing this. There were rules for doing that. Everything was nailed down. When I went to the software team, I discovered there are no rules in software, there are only opinions.

The ‘H’ in software development is for ‘Happiness’

So straight away, it became a very people-focused job because if you didn’t know what you were doing, then you were a bit stuck.  I had to go through a learning curve, along with every other technician who was on the team. And the thing about software with it being intangible is that it becomes all about the process. If a physical piece of kit like the display screen isn’t working, it’s pretty obvious. It’s black, it’s blank, nothing is happening. It’s not always obvious that you’ve done something wrong with software when you’re developing it.

So we were very heavily reliant on process; again, people have got to decide what’s the right process for this job? What are we going to do? Who’s going to do it? Who’s able to do it? And it was interesting to suddenly move into this world where there were no rules and where there were some prima donnas.

Photo by Sandy Millar on Unsplash

We had a handful of really good programmers who could do just about anything with the aeroplane, and you had to make the best use of them without letting them get out of control.  Equally, you had people on the other end of the scale who’d been posted into the software team, who really did not want to be there. They wanted to get their hands dirty, fixing aeroplanes. That’s what they wanted to do. Interesting times.

From the software team, I moved on to big projects like Eurofighter, that’s when I got introduced to:

Systems Engineering

And I have no problem with plugging systems engineering, because as a safety engineer, I know [that] if there is good systems engineering and good project management, my job is going to be so much easier. I’ve turned up on a number of projects as a consultant, or whatever, and I say, OK, where’s the safety plan? And they say, oh, we want you to write it. OK, yeah, I can do that. But where’s the project management plan, or where’s the systems engineering management plan?

If there isn’t one, or it’s garbage (as it sometimes is), I’m sat there going: OK, my job just got ten times harder, because safety is an emergent property. You can say a piece of kit is on or off. You can say it’s reliable. But you can’t tell whether it’s safe until you understand the context: what are you asking it to do, in what environment? So unless you have something to give you that wider, bigger picture and put some discipline on the complexity, it’s very hard to get a good result.

Photo by Sam Moqadam on Unsplash

So systems engineering is absolutely key, and I’m always glad to work with a good systems engineer and all the artifacts that they’ve produced. Clarity in your documentation is very helpful. And being [involved], if you’re lucky, at the very beginning of a program, you’ve got an opportunity to design safety, and all the other qualities you want, into your product from the start and make sure it’s there, right there in the requirements.

Also, systems engineers do the requirements: working out what needs to be done, what you need the product to do and, just as importantly, what you need it not to do, and then passing that on down the chain. That’s very important. And I put ‘managing at a distance’ in the title because, unlike in the operations world, you can’t just say “that’s broken, can you please go and fix it”.

Managing at a Distance

It’s not as direct as that.  You’re looking at your process, you’re looking at the documentation, you’re working with, again, lots and lots of people, not all of whom have the same motivation that you do.

Photo by Bonneval Sebastien on Unsplash

Industry wants to get paid. They want to do the minimum work to get paid, [in order] to maximize their profit. You want the best product you can get. The pilots want something that punches holes in the sky and looks flash, and they don’t really care about much else, because they’re quite inoculated to risk.

So you’ve got people with competing motivations, and everything has got to be worked indirectly. You don’t get to control things directly. You’ve got to try and influence and put good things in place, in almost an act of faith that, [if you put] good things in place, good things will result. A good process will produce a good product, and most of the time that’s true. So (my last slide on work), I ended up doing consultancy, first internally and then externally.

Part 4 will follow next week!


Categories
System Safety

Reflections on a Career in Safety, Part 2

In ‘Reflections on a Career in Safety, Part 2’ I move on to …

Different Kinds of Safety

So I’m going to talk a little bit about highlights, that I hope you’ll find useful.  I went straight from university into the Air Force and went from this kind of [academic] environment to heavy metal, basically.  I guess it’s obvious that wherever you are if you’re doing anything in industry, workplace health and safety is important because you can hurt people quite quickly. 

Workplace Health and Safety

In my very first job, we had people doing welding, high-voltage electrics, heavy mechanical things; all [the equipment was] built out of centimeter-thick steel. It was tough stuff, and people still managed to bend it. With the amount of energy that was rocking around there, you could very easily hurt people. Even the painters (that sounds like a safe job, doesn’t it?) were at risk, because aircraft paint at that time was a cyanoacrylate, a compound of cyanide, that we used to paint aeroplanes with.

All the painters and finishers had to wear head-to-toe protective equipment and breathing apparatus. If you’re giving people air to breathe, if you get that wrong, you can hurt people quite quickly. So even managing the hazards of the workplace introduced further hazards that all had to be very carefully controlled.

Photo by Ömer Yıldız on Unsplash

And because you’re in operations, all the decisions about what kind of risks and hazards you’re going to face, they’ve already been made long before.  Decisions that were made years ago, when a new plane or ship or whatever it was, was being bought and being introduced [into service]. Decisions made back then, sometimes without realizing it, meant that we were faced with handling certain hazards and you couldn’t get rid of them. You just had to manage them as best you could.

Overall, I think we did pretty well. Injuries were rare, despite the very exciting things that we were dealing with sometimes.  We didn’t have too many near misses – not that we heard about anyway. Nevertheless, that [risk] was always there in the background. You’re always trying to control these things and stop them from getting out of control.

One of the things about a workplace in operations and support, whether you’re running a fleet of aeroplanes or you’re servicing some kit for somebody else and then returning it to them, it tends to be quite a people-centric job. So, large groups of people doing the job, supervision, organization, all that kind of stuff.  And that can all seem very mundane, a lot of HR-type stuff. But it’s important and it’s got to be dealt with.

So the real world of managing people is a lot of logistics. Making sure that everybody you need is available to do the work, making sure that they’ve got all the kit, all the technical publications that tell them what to do, the information that they need.  It’s very different to university – a lot of seemingly mundane stuff – but it’s got to be got right because the consequences of stuffing up can be quite serious.

Safe Systems of Work

So moving on to some slightly different topics: when I got onto working with aeroplanes, there was an emphasis on a safe system of work, because doing maintenance on a very complex aeroplane was quite an involved process and it had to be carefully controlled. So we would have what’s usually referred to as a Permit to Work system, where you very tightly control what people are allowed to do to any particular plane. It doesn’t matter whether it’s a plane, a big piece of mining equipment, or sending people in to do maintenance on infrastructure; whatever it might be, you’ve got to make sure that the power is disconnected before people start pulling it apart, et cetera, et cetera.

Photo by Leon Dewiwje on Unsplash

And then when you put it back together again, you’ve got to make sure that there aren’t any bits leftover and everything works before you hand it back to the operators because they’re going to go and do some crazy stuff with it. You want to make sure that the plane works properly. So there was an awful lot of process in that. And in those days, it was a paperwork process. These days, I guess a lot would be computerized, but it’s still the same process.
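A Permit to Work system is essentially about controlling what may happen, and in what order, before kit goes back to the operators. The steps above can be sketched as a tiny state machine. This is a toy illustration; the state names and transitions are made up, and real permit systems carry far more information (isolations, signatures, limitations, and so on).

```python
# Toy Permit to Work state machine. States and transition rules are
# illustrative only; real permit processes are far more involved.

ALLOWED = {
    "requested":        ["authorised"],
    "authorised":       ["work_in_progress"],
    "work_in_progress": ["inspected"],   # no bits left over, everything works
    "inspected":        ["closed"],      # only now is the kit handed back
    "closed":           [],
}

class Permit:
    def __init__(self) -> None:
        self.state = "requested"

    def advance(self, new_state: str) -> None:
        # Refuse any transition the process does not allow, e.g. closing
        # a permit before the work has been inspected.
        if new_state not in ALLOWED[self.state]:
            raise RuntimeError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

p = Permit()
for step in ["authorised", "work_in_progress", "inspected", "closed"]:
    p.advance(step)
```

Whether the permit lives on paper or in a computer, the control is the same: the process, not the medium, is what stops steps being skipped.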

If you muck up the process, it doesn't matter whether [it is paper-based or not]. If you've got a rubbish process, you're going to get rubbish results, and [computerization] doesn't change that; you just stuff up more quickly because you've got a more powerful tool. And for certain things we had to take what I've called 'special measures'. In my case, we were a strike squadron, which meant our planes would carry nuclear weapons if they had to.

Special Processes for Special Risks

So if the Soviets had charged across the border with 20,000 tanks and we couldn't stop them, then it was time to use – we called them 'buckets of sunshine'. Sounds nice, doesn't it? Anyway, there were some fairly particular processes and rules for looking after buckets of sunshine, and I'm glad to say we only ever used dummies. But when the convoy arrived and yours truly had to sign for the weapon, and then the team started loading it, that does concentrate your mind as an engineer. I think I was twenty-two or twenty-three at the time.

Photo by Oscar Ävalos on Unsplash

Somebody on [our Air Force] station stuffed up the paperwork and got caught. That was two careers destroyed straight away – people my age, whom I knew – just through not being careful enough about what they were doing. So, yeah, that does concentrate the mind. If you're in a major hazard facility – say, a chemical plant with perhaps thousands of tonnes of dangerous chemicals – there are some very special risk controls, and you have to make sure they are going to work when they're needed.

And finally, there is 'airworthiness': decisions about whether we could fly an aeroplane even though some bits of it were not working. That was a decision I got to make once I was signed off to do it. But it's a team job: you talk to the specialists, who say, this bit of the aeroplane isn't working, but it doesn't matter as long as you don't do "that".

Photo by Eric Bruton on Unsplash

So you had to make sure that the pilots knew: OK, this isn't working, and this is the practical effect from your [operator's] point of view. So you don't switch this thing on, or rely on it working, because it isn't going to work. Those [airworthiness] decisions were an exciting part of the job, which I really enjoyed. That's when you had to understand what you were doing – not on your own, because there were people who'd been there a lot longer than me – but we had to make things work as best we could. That was life.

Part 3 will follow next week!

New to System Safety? Then start here. There’s more about The Safety Artisan here. Subscribe for free regular emails here.

Categories
System Safety

Reflections on a Career in Safety, Part 1

This is Part 1 of my ‘Reflections on a Career in Safety’, from “Safety for Systems Engineering and Industry Practice”, a lecture that I gave to the University of Adelaide in May 2021. My thanks to Dr. Kim Harvey for inviting me to do this and setting it up.

The Lecture, Part 1

Hi, everyone, my name is Simon Di Nucci and I'm an engineer. It sounds cheesy, but I actually got into safety by accident – we'll talk about that later. I was asked to talk a little bit about career stuff: some reflections on quite a long career in safety, engineering, and other things, and then some material that hopefully you will find interesting and useful about safety work in industry and working for government.

Context: my Career Summary

I’ve got three areas to talk about, operations and support, projects and product development, and consulting.

I have been on some very big projects, Eurofighter, Future Submarine Programme, and some others that have been huge multi-billion-dollar programs, but also some quite small ones as well. They’re just as interesting, sometimes more so. In the last few years, I’ve been working in consultancy. I have some reflections on those topics and some brief reflections on a career in safety.

Starting Out in the Air Force

So, a little bit about my career to give you some context. I did 20 years in the Royal Air Force in the U.K. – as you can tell from my accent, I'm not from around here. I started off fresh out of university, with a first degree in aerospace systems engineering. After my Air Force training, my first job was as an engineering manager on ground support equipment, in what was called General Engineering Flight.

We had people looking after the electrical and hydraulic power rigs that were needed to maintain aircraft on the ground. And we had painters and finishers, a couple of carpenters, a fabric worker, and some metal workers and welders – that kind of thing. So I went from a university where we were learning about high-tech stuff, about what was yet to come in the aerospace industry, to the opposite end: a lot of heavy mechanical engineering that was quite simple.

And then, six weeks after I started in my very first job, we had a bit of excitement: the Iraqis invaded Kuwait. I didn't go off to war, thank goodness, but some of my people did, and we all got ready for it.

Photo by Jacek Dylag on Unsplash

After that, I did a couple of years on a squadron, on the front line, maintaining and fixing the aeroplanes and looking after operations. And then from there, I went for a complete change: three years on a software maintenance team. That was a very different job, which I'll talk about later. I had the choice of two unpleasant postings that I really did not want, or the software maintenance team.

Into Software by accident as well!

I discovered a burning passion to do software – to avoid going to those other places. And that's how I ended up there. I had three fantastic years there and really enjoyed it. Then I was thinking of going somewhere down south in the UK, to be near family, but we went further north. That's the way things happen in the military.

I got taken on as the rather grandly titled Systems and Software Specialist Officer on the Typhoon Field Team. The Eurofighter Typhoon wasn’t in service at that point. (That didn’t come in until 2003 when I was in my last Air Force job, actually.)  We had a big team of handpicked people who were there to try and make sure that the aircraft was supportable when it came into service.

One of the big things about the new aircraft was that it had tons of software on board – five million lines of code, which was a lot at the time – and a vast amount of data. It was a data hog: it ate vast amounts of data and it produced vast amounts of data, and all of that needed to be managed. It was on a scale beyond anything we'd seen before, so it was a big shock to the Air Force.

More Full-time Study

Photo by Mike from Pexels

Then, after that, I was very fortunate. (This is a picture of York, with the Minster in the background.) I spent a year full-time doing the Safety-Critical Systems Engineering course at York, which was excellent. It was a privilege to have a year to do that full-time. I've watched a lot of people study part-time while they've got a job and a family, and it's really tough. So I was very, very pleased that I got to do that.

After that, I did another software job, this time in a small team trying to drive software supportability into new projects coming into service – mainly aircraft, but other things as well. That was almost like an internal consultancy job; the only difference was that we were free, which you would think would make it easier to sell our services. But the opposite was the case.

Finally, in my last Air Force job, I was part of the engineering authority looking after the Typhoon aircraft as it came into service, which is always a fun time. We had just got the plane into service when one of the boxes I was responsible for malfunctioned, and the undercarriage refused to come down on the plane – which is not what you want. It did get down safely in the end, but then the whole fleet was grounded and we had to fix the problem. So, some more excitement there; not always the kind you want, but there we go. That took me up to 2006.

At that point, I transitioned out of the Air Force and I became a consultant

So, I had always regarded consultants with a bit of suspicion up until then, and now I am one. I started off with a firm called QinetiQ, which is also over here. I was doing safety work mainly with the aviation team, but again, we did all sorts: vehicles, ships, network and logistics stuff, all kinds of things. And then in 2012, I joined Frazer-Nash in order to come to Australia.

So we arrived in Australia in November 2012, and we've been here in Adelaide almost all that time. And you can't get rid of us now because we're citizens, so you're stuck with us. But it's been lovely. We love Adelaide and really enjoy, again, the varied work here.

Adelaide CBD, photo by Simon Di Nucci

Part 2 will follow next week!

New to System Safety? Then start here. There’s more about The Safety Artisan here. Subscribe for free regular emails here.

Categories
Blog Functional Safety

Functional Safety

The following is a short, but excellent, introduction to the topic of ‘Functional Safety’ by the United Kingdom Health and Safety Executive (UK HSE). It is equally applicable outside the UK, and the British Standards (‘BS EN’) are versions of international ISO/IEC standards – e.g. the Australian version (‘AS/NZS’) is often identical to the British standard.

My comments and explanations are shown [thus].

[Functional Safety]

“Functional safety is the part of the overall safety of plant and equipment that depends on the correct functioning of safety-related systems and other risk reduction measures such as safety instrumented systems (SIS), alarm systems and basic process control systems (BPCS).

[Functional Safety is popular, in fact almost ubiquitous, in the process industry, where large amounts of flammable liquids and gasses are handled. That said, the systems and techniques developed by and for the process industry have been so successful that they are found in many other industrial, transport and defence applications.]

SIS [Safety Instrumented Systems]

SIS are instrumented systems that provide a significant level of risk reduction against accident hazards.  They typically consist of sensors and logic functions that detect a dangerous condition and final elements, such as valves, that are manipulated to achieve a safe state.

The general benchmark of good practice is BS EN 61508, Functional safety of electrical/electronic/programmable electronic safety related systems. BS EN 61508 has been used as the basis for application-specific standards such as:

  • BS EN 61511: process industry
  • BS EN 62061: machinery
  • BS EN 61513: nuclear power plants

BS EN 61511, Functional safety – Safety instrumented systems for the process industry sector, is the benchmark standard for the management of functional safety in the process industries. It defines the safety lifecycle and describes how functional safety should be managed throughout that lifecycle. It sets out many engineering and management requirements, however, the key principles of the safety lifecycle are to:

  • use hazard and risk assessment to identify requirements for risk reduction
  • allocate risk reduction to SIS or to other risk reduction measures (including instrumented systems providing safety functions of low / undefined safety integrity)
  • specify the required function, integrity and other requirements of the SIS
  • design and implement the SIS to satisfy the safety requirements specification
  • install, commission and validate the SIS
  • operate, maintain and periodically proof-test the SIS
  • manage modifications to the SIS
  • decommission the SIS

BS EN 61511 also defines requirements for management processes (plan, assess, verify, monitor and audit) and for the competence of people and organisations engaged in functional safety.  An important management process is Functional Safety Assessment (FSA) which is used to make a judgement as to the functional safety and safety integrity achieved by the safety instrumented system.

Alarm Systems

Alarm systems are instrumented systems designed to notify an operator that a process is moving out of its normal operating envelope to allow them to take corrective action.  Where these systems reduce the risk of accidents, they need to be designed to good practice requirements considering both the E,C&I design and human factors issues to ensure they provide the necessary risk reduction.

In certain limited cases, alarm systems may provide significant accident risk reduction, where they also might be considered as a SIS. The general benchmark of good practice for management of alarm systems is BS EN 62682.

BPCS [Basic Process Control Systems]

BPCS are instrumented systems that provide the normal, everyday control of the process.  They typically consist of field instrumentation such as sensors and control elements like valves which are connected to a control system, interfaced, and could be operated by a plant operator.  A control system may consist of simple electronic devices like relays or complicated programmable systems like DCS (Distributed Control System) or PLCs (Programmable Logic Controllers).

BPCS are normally designed for flexible and complex operation and to maximize production rather than to prevent accidents.  However, it is often their failure that can lead to accidents, and therefore they should be designed to good practice requirements. The general benchmark of good practice for instrumentation in process control systems is BS 6739.”

[To be honest, I would have put this the other way around. The BPCS came first – although they were just called 'control systems' – and some had alarms to get the operators' attention. As the complexity of these control systems increased, cascading alarms became a problem and alarms had to be managed as a 'thing' in their own right. Finally, the process industry added further systems when the control system/alarm system combination became inadequate, and thus the terms SIS and BPCS were born.]

[It’s worth noting that for very rapid processes where a human either cannot intervene fast enough or lacks the data to do so reliably, the SIS becomes an automatic protection system, as found in rail signaling systems, or ‘autonomous’ vehicles. Also for domains where there is no ‘fail-safe’ state, for example in aircraft flight control systems, the tendency has been to engineer multiple, redundant, high-integrity control systems, rather than use a BCPS/SIS combo.]

Copyright

The above text is reproduced under Creative Commons Licence from the UK HSE’s webpage. The Safety Artisan complies with such licensing conditions in full.

[Functional Safety – END]

Back to Home Page

Categories
Mil-Std-882E Safety Analysis System Safety

How to Understand Safety Standards

Learn How to Understand Safety Standards with this FREE session from The Safety Artisan.

In this module, Understanding Your Standard, we're going to ask the question: Am I Doing the Right Thing, and am I Doing it Right? Standards are commonly used for many reasons. We need to understand our chosen system safety engineering standard in order to know: the concepts upon which it is based; what it was designed to do, why, and for whom; which kinds of risk it addresses; what kinds of evidence it produces; and its advantages and disadvantages.

Understand Safety Standards : You’ll Learn to

  • List the hazard analysis tasks that make up a program; and
  • Describe the key attributes of Mil-Std-882E. 
Understanding Your Standard

Topics:  Understand Safety Standards

Aim: Am I Doing the Right Thing, and am I Doing it Right?

  • Standards: What and Why?
  • System Safety Engineering pedigree;
  • Advantages – systematic, comprehensive, etc.;
  • Disadvantages – cost/schedule, complexity & quantity not quality.

Transcript: Understand Safety Standards

Click here for the Transcript on Understanding Safety Standards

In Module Three, we’re going to understand our Standard. The standard is the thing that we’re going to use to achieve things – the tool. And that’s important because tools designed to do certain things usually perform well. But they don’t always perform well on other things. So we’re going to ask ‘Are we doing the right thing?’ And ‘Are we doing it right?’

What and Why?

So, what are we going to do, and why are we doing it? First of all, the use of standards in safety is very common for lots of reasons. It helps us to have confidence that what we’re doing is good enough. We’ve met a standard of performance in the absolute sense. It helps us to say, ‘We’ve achieved standardization or commonality in what we’re doing’. And we can also use it to help us achieve a compromise. That can be a compromise across different stakeholders or across different organizations. And standardization gives us some of the other benefits as well. If we’re all doing the same thing rather than we’re all doing different things, it makes it easier to train staff. This is one example of how a standard helps.

However, we need to understand this tool that we’re going to use. What it does, what it’s designed to do, and what it is not designed to do. That’s important for any standard or any tool. In safety, it’s particularly important because safety is in many respects intangible. This is because we’re always looking to prevent a future problem from occurring. In the present, it’s a little bit abstract. It’s a bit intangible. So, we need to make sure that in concept what we’re doing makes sense and is coherent. That it works together. If we look at those five bullet points there, we need to understand the concept of each standard. We need to understand the basis of each one.

And they’re not all based on the same concept. Thus some of them are contradictory or incompatible. We need to understand the design of the standard. What the standard does, what the aim of the standard is, why it came into existence. And who brought it into existence. To do what for who – who’s the ultimate customer here?

And for risk analysis standards, we need to understand what kind of risks they address, because the way you treat a financial risk might be very different from a safety risk. In the world of finance, you might have a portfolio of products, like loans, with some risks associated with them. One or two loans might go bad and you might lose money on those, but as long as the whole portfolio is making money, that might be acceptable to you. You might say, 'I'm not worried that 10% of my loans have gone south; I'm still making plenty of profit out of the other 90%'. It doesn't work that way with safety. You can't say, 'It's OK that I've killed a few people over here, because all this lot over here are still alive!'. It doesn't work like that!

Also, what kind of evidence does the standard produce? In safety, we are very often working in a legal framework that requires us to do certain things: to achieve a certain level of safety and to prove that we have done so. So we need certain kinds of evidence. In different jurisdictions and different industries, some evidence is acceptable and some is not; you need to know which applies in your area.

And then finally, let's think about the pros and cons of the standard: what does it do well, and what does it do not so well?

System Safety Pedigree

We’re going to look at a standard called Military Standard 882E. Many decades ago, this standard developed was created by the US government and military to help them bring into service complex-cutting edge military equipment. Equipment that was always on the cutting edge. That pushed the limits of what you could achieve in performance.

That’s a lot of complexity. Lots of critical weapon systems, and so forth. And they needed something that could cope with all that complexity. It’s a system safety engineering standard. It’s used by engineers, but also by many other specialists. As I said, it’s got a background from military systems. These days you find these principles used pretty much everywhere. So, all the approaches to System Safety that 882 introduced are in other standards. They are also in other countries.

It addresses risks to people, equipment, and the environment, as we heard earlier. And because it's an American system safety standard, it's very much about identifying requirements: what needs to happen for the system to be safe? To answer that, it performs analyses on those requirements and generates further requirements, including requirements for test evidence, which we then need to fulfill. It's got several important advantages and disadvantages, which we're going to discuss in the next few slides.

Comprehensive Analysis

Before we get to that, we need to look at the key feature of this standard. The strengths and weaknesses of this standard come from its comprehensive analysis. And the chart (see the slide) is meant to show how we are looking at the system from lots of different perspectives. (It’s not meant to be some arcane religious symbol!) So, we’re looking at a system from 10 different perspectives, in 10 different ways.

Going around clockwise, we've got these ten different hazard analysis tasks. First of all, we start off with preliminary hazard identification, then preliminary hazard analysis. We do some system requirements hazard analysis; that is, we identify the safety requirements that the system has to meet so that it is safe. We look at subsystem and system hazard analysis, and at operating and support hazard analysis – people working with the system. Number seven, we look at health hazard analysis: can the system cause health problems for people? Then functional hazard analysis, which is all about what the system does – think of software and data-driven functionality; maybe there's no physical system, but it does stuff and delivers benefits or risks. Then system-of-systems hazard analysis – we could have lots of different and/or complex systems interacting. And then finally, the tenth one: environmental hazard analysis.

If we use all these perspectives to examine the system, we get a comprehensive analysis of the system. From this analysis, we should be confident that we have identified everything we need to. All the hazards and all the safety requirements that we need to identify. Then we can confidently deliver an appropriate safe system. We can do this even if the system is extremely complex. The standard is designed to deal with big, complex cutting-edge systems.
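As a planning aid, the ten perspectives just listed can be treated as a simple coverage checklist. This is an illustrative sketch only – the task names follow the lecture's wording rather than the standard's formal task titles and numbers:

```python
# Illustrative sketch: the ten hazard analysis perspectives as a checklist.
# Task names follow the lecture; this is not an official Mil-Std-882E artifact.

HAZARD_ANALYSIS_TASKS = [
    "Preliminary Hazard Identification",
    "Preliminary Hazard Analysis",
    "System Requirements Hazard Analysis",
    "Subsystem Hazard Analysis",
    "System Hazard Analysis",
    "Operating & Support Hazard Analysis",
    "Health Hazard Analysis",
    "Functional Hazard Analysis",
    "System-of-Systems Hazard Analysis",
    "Environmental Hazard Analysis",
]

def coverage_gaps(completed):
    """Return the analysis perspectives not yet applied to the system."""
    done = set(completed)
    return [t for t in HAZARD_ANALYSIS_TASKS if t not in done]

# Early in a program, only the preliminary tasks may be done; the
# remaining gaps show where comprehensive coverage is still missing.
done_so_far = ["Preliminary Hazard Identification", "Preliminary Hazard Analysis"]
assert len(coverage_gaps(done_so_far)) == 8
```

The point of the sketch is the tailoring decision discussed later in this lesson: a program deliberately chooses which of the ten perspectives to apply, and a gap should be a conscious decision, not an oversight.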

Advantages #1

In fact, as we move on to advantages, that's the number one advantage of this standard: if we use it, and we use all 10 of those tasks, we can cope with the largest and most demanding programs. I spent much of my career working on the Eurofighter Typhoon, a multi-billion-dollar program on which four different nations worked together. We used a derivative of Mil. Standard 882 to look at safety and analyze it, and it coped – it was powerful enough to deal with that gigantic program. I spent 13 years of my life, on and off, on that program, so I'd like to think that I know my stuff when we're talking about this.

As we’ve already said, it’s a systematic approach to safety. Systems, safety, engineering. And we can start very early. We can start with early requirements – discovery. We don’t even need a design – we know that we have a need. So we can think about those needs and analyze them.

And it can cover us right through until final disposal. And it covers all kinds of elements that you might find in a system. Remember our definition of ‘system’? It’s something that consists of hardware, software, data, human beings, etc. The standard can cope with all the elements of a system. In fact, it’s designed into the standard. It was specifically designed to look at all those different elements. Then to get different insights from those elements. It’s designed to get that comprehensive coverage. It’s really good at what it does. And it involves, not just engineers, but people from all kinds of other disciplines. Including operators, maintainers, etc, etc.

I came from a maintenance background. I was either directly or indirectly supporting operators. I was responsible for trying to help them get the best out of their system. Again, that’s a very familiar world to me. And rigorous standards like this can help us to think rigorously about what we’re doing. And so get results even in the presence of great complexity, which is not always a given, I must say.

So, we can be confident by applying the standard. We know that we’re going to get a comprehensive and thorough analysis. This assures us that what we’re doing is good.

Advantages #2

So, there’s another set of advantages. I’ve already mentioned that we get assurance. Assurance is ‘justified confidence’. So we can have high confidence that all reasonably foreseeable hazards will be identified and analyzed. And if you’re in a legal jurisdiction where you are required to hit a target, this is going to help you hit that target.

The standard was also designed for use in contracts, and to be applied to big programs – by which we mean the development of complex, high-performance systems. There is a lot of risk in such programs, and the standard is designed to cope with it.

Finally, the standard also includes requirements for contracting, for interfaces with other systems, and for interfaces with systems engineering. This is important for a variety of disciplines – other engineering and technical disciplines, and non-technical ones too – and for analysis and recordkeeping. All these things matter, whether for legal reasons or not: we need to keep records, and we need to liaise and consult with other people. There are legal requirements for that in many countries, and this standard is going to help us do all those things.

But, of course, every standard has pros and cons, and Mil. Standard 882 is no exception. So, let's look at some of the disadvantages.

Disadvantages #1

First of all, a full system safety program might be overkill for the system that you want to use, or that you want to analyze.  The Cold War, thank goodness, is over; generally speaking, we’re not in the business of developing cutting-edge high-performance killing machines that cost billions and billions of dollars and are very, very risky. These days, we tend to reduce program risk and cost by using off-the-shelf stuff and modifying it. Whether that be for military systems, infrastructure in the chemical industry, transportation, whatever it might be. Very much these days we have a family of products and we reuse them in different ways. We mix and match to get the results that we want.

And of course, all this comprehensive analysis is not cheap and it’s not quick. It may be that you’ve got a program that is schedule-constrained. Or you want to constrain the cost and you cannot afford the time and money to throw a full 882 program at it. So, that’s a disadvantage.

The second family of problems is that these kinds of safety standards have often been applied prescriptively. The customer would say, 'Go away and do this; I'm going to tell you what to do, based on what I think reduces my risk' – or at least covers their backside. So contractors got used to being told to do certain things by purchasers and customers. Those customers didn't understand the standards they were applying and insisting upon, or how to tailor them to get the result they wanted, so they asked for dumb things, or things that didn't add value. And the contractors got used to working in that kind of environment – used to being told what to do and doing it, because they wouldn't get paid otherwise. So, you can't really blame them.

But that’s not great, OK? That can result in poor behaviors. You can waste a lot of time and money doing stuff that doesn’t actually add value. And everybody recognizes that it doesn’t add value. So you end up bringing the whole safety program into disrepute and people treat it cynically. They treat it as a box-ticking exercise. They don’t apply creativity and imagination to it. Much less determination and persistence. And that’s what you need for a good effective system safety program. You need creativity. You need imagination. You need people to be persistent and dedicated to doing a good job. You need that rigor so that you can have the confidence that you’re doing a good job because it’s intangible.

Disadvantages #2

Let’s move onto the second kind of family of disadvantages. And this is the one that I’ve seen the most, actually, in the real world. If you do all 10 tasks and even if you don’t do all 10, you can create too many hazards. If you recall the graphic from earlier, we have 10 tasks. Each task looks at the system from a different angle. What you can get is lots and lots of duplication in hazard identification. You can have essentially the same hazards identified over and over again in each task. And there’s a problem with that, in two ways.

First of all, quality suffers. We end up with a fragmented picture of hazards – lots and lots of hazards in the hazard log, but not only that: we get fragments of hazards rather than the real thing. Remember those tests for what a hazard really is? Very often you get causes masquerading as hazards, or exacerbating factors that make things worse; they're not hazards in their own right, but they get recorded as hazards. That problem leaves people unable to see the big picture of risk, which undermines what we're trying to do. And, as I say, we get lots of things misidentified and thrown into the pot. This also distracts people: you end up putting effort into managing things that don't make a difference to safety and don't need to be managed. Those are the quality problems.

And then there are quantity problems. And from personal experience, having too many hazards is a problem in itself.  I’ve worked on large programs where we were managing 250 hazards or thereabouts. That is challenging even with a sizable, dedicated team. That is a lot of work in trying to manage that number of hazards effectively. And there’s always the danger that it will slide into becoming a box-ticking exercise. Superficial at best.

I’ve also seen projects that have two and a half thousand hazards or even 4000 hazards in the hazard log. Now, once you get up to that level, that is completely unmanageable. People who have thousands of hazards in a hazard log and they think they’re managing safety are kidding themselves. They don’t understand what safety is if they think that’s going to work. So, you end up with all these items in your hazard log, which become a massive administrative burden. So people end up taking shortcuts and the real hazards are lost. The real issues that you want to focus on are lost in the sea of detail that nobody will ever understand. You won’t be able to control them.

Unfortunately, Mil. Standard 882 is good at generating these grotesque numbers of hazards. If you don't know how to use the standard and don't actively manage this issue, it gets to that stage: it can, and does, go badly wrong. This is particularly true on very big programs, where you really need clarity.

Summary of Module

Let’s summarize what we’ve done with this module. The aim was to help us understand whether we’re doing the right thing and whether we’ve done it right. And standards are terrific for helping us to do that. They help us to ensure we’re doing the right thing. That we’re looking at the right things. And they help us to ensure that we’re doing it rigorously and repeatedly. All the good quality things that we want. And Mil. Standard 882E that we’re looking at is a system safety engineering standard. So it’s designed to deal with complexity and high-performance and high-risk. And it’s got a great pedigree. It’s been around for a long time.

Now that gives advantages. With this standard, we have a system safety program that helps us to deal with complexity and can cope with big programs with lots of risks. That’s great.

The disadvantages of this standard are that, if we don’t know how to tailor and manage it properly, it can cost a lot of money and take a long time to give results, which can cause problems for the program; ultimately, safety can end up being accidentally ignored if the work doesn’t deliver on time. It can also generate complexity, and a quantity of data so great that it actually undermines the quality of the data and what we’re trying to achieve: we get a fragmented picture in which we can’t see the true risks, and so we can’t manage them effectively. If we get it wrong with this standard, we can get it really wrong. And that brings us to the end of this module.

This is Module 3 of SSRAP

This is Module 3 from the System Safety Risk Assessment Program (SSRAP) Course. Risk Analysis Programs – Design a System Safety Program for any system in any application. You can access the full course here.

You can find more introductory lessons at Start Here.

Categories
Behind the Scenes

Why Call it The Safety Artisan?

Why did I call my business The Safety Artisan?

artisan /ˈɑːtɪzan, ɑːtɪˈzan/ noun

A worker in a skilled trade, especially one that involves making things by hand. “street markets where local artisans display handwoven textiles, painted ceramics, and leather goods”

Why Call it The Safety ‘Artisan’?


Hi, everyone. When I was choosing a name for my business, I thought of quite a lot of alternatives, but I settled on The Safety Artisan for three reasons. First, I liked the meaning of the word, the idea of an individual person pursuing their craft and trying to do it to the very best of their abilities.

Second, I liked the application because I’ve worked on a lot of very large, even multi-billion-dollar projects; but we’re still knowledge workers. We’re still individuals who have to be competent at what we do in order to deliver a safe result for people.

And third, I liked the idea, the image of the cottage industry, the artisan working at home as I am now, and delivering goods and services that other people can use wherever they are. And indeed, you might be home or you might be on your mobile phone listening to this.

So I liked all three of those things. I thought, yes, that’s what I’m about. That’s what I believe in and want to do. And if that sounds good to you, too, then please check out The Safety Artisan, where I provide #safety #engineering #training.

Meet the Author

Learn safety engineering with me, an industry professional with 25 years of experience. I have:

• Worked on aircraft, ships, submarines, ATMS, trains, and software;

• Worked on everything from tiny programs to some of the biggest (Eurofighter, Future Submarine);

• Worked in the UK and Australia, on US and European programs;

• Taught safety to hundreds of people in the classroom, and thousands online;

• Presented on safety topics at several international conferences.

Learn more about me here.