Categories
Blog

The 2022 Digest

This is The 2022 Digest – all the posts from The Safety Artisan last year. There have been 31 posts in all covering subjects such as:

  • Risk and Safety basics;
  • Tools and Techniques;
  • A short series on Safety Management (to be continued);
  • Design Safety;
  • SFARP and Australian WHS;
  • Hazard Logs (also to be continued);
  • Launching my Thinkific page;
  • Cyber security;
  • A series on Software Safety and Standards; and
  • Updates of posts on System Safety Analyses.

Here we go…

The 2022 Digest: Quarter Four

In this 45-minute session, I’m looking at System Requirements Hazard Analysis, or SRHA, which is Task 203 in the Mil-Std-882E standard. I will explore Task 203’s aim, description, scope, and contracting requirements.  SRHA is an important and complex task, which needs to be done on several levels to be successful.  This video explains the issues … Read more

In this 45-minute session, The Safety Artisan looks at how to do Preliminary Hazard Analysis, or PHA, which is Task 202 in Mil-Std-882E. We explore Task 202’s aim, description, scope, and contracting requirements. We also provide value-adding commentary and explain the issues with PHA – how to do it well and avoid the pitfalls. Topics: … Read more

In this full-length (40-minute) session, The Safety Artisan looks at Functional Hazard Analysis, or FHA, which is Task 208 in Mil-Std-882E. FHA analyses software, complex electronic hardware, and human interactions. We explore the aim, description, and contracting requirements of this Task, and provide extensive commentary on it. (We refer to other lessons for special techniques … Read more

Are you looking for Safety Engineering Jobs in Australia?  Thinking of moving into the profession and wondering if it’s worth it?  Already a safety engineer and thinking of moving to Australia (Poms, take note)?  Then this article is for you! Introduction The most popular online job site in Australia is seek.com.au. If we go on … Read more

SW Safety Principles Conclusions and References is the sixth and final blog post on Principles of Software Safety Assurance. In them, we look at the 4+1 principles that underlie all software safety standards. We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines … Read more

This post, Software Safety Assurance and Standards, is the fifth in a series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards. We outline common software safety assurance principles that are evident in software safety standards and best practices. You can … Read more

Software Safety Assurance is the fourth in a new series of six blog posts on Principles of Software Safety Assurance. In them, we look at the 4+1 principles that underlie all software safety standards. We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these … Read more

Software Safety Principle 4 is the third in a new series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards. We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of … Read more

The 2022 Digest: Quarter Three

Software Safety Principles 2 and 3 is the second in a new series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards. We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think … Read more

This is the first in a new series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards. We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as … Read more

Proportionality is about committing resources to the Safety Program that are adequate – in both quality and quantity – for the required tasks. Proportionality is a concept that should be applied to determine the allocation of resources and effort to a safety and environmental argument based on its risk.  It is a difficult concept … Read more

This post, ‘Blog: Australian vs. UK Safety Law’, compares the two approaches, based on my long experience of working on both sides. Are you a safety professional thinking of emigrating from the UK to Australia?  Well, I’ve done it, and here’s my BREXIT special guide!  In this 45-minute video, The Safety Artisan looks at the … Read more

In this course, ‘CISSP 2021: What’s New?’, we look at the significant changes that have been made to the CISSP Official Exam Outline (the course syllabus). Learn what’s new in the CISSP Curriculum from May 1st, 2021 (next update in 2024). There are still Eight Domains – D1, D3 & D7 are … Read more

In this 45-minute video, I discuss System Safety Principles, as set out by the US Federal Aviation Authority in their System Safety Handbook. Although this was published in 2000, the principles still hold good (mostly) and are worth discussing. I comment on those topics where the modern practice has moved on, and those jurisdictions where … Read more

In this 33-minute session, Safety Concepts Part 2, The Safety Artisan equips you with more Safety Concepts. We look at the basic concepts of safety, risk, and hazard in order to understand how to assess and manage them. Exploring these fundamental topics provides the foundations for all other safety topics, but it doesn’t have to … Read more

In Hazard Logs – a Brief Summary, we will give you an overview of this important safety management tool. This post serves as an introduction to longer posts and videos (e.g. Hazard Logs & Hazard Tracking Systems), which will provide you with much more content. Hazard Logs – a Brief Summary Description of Hazard Log … Read more

In this Australian WHS Course, we show you how to practically and pragmatically implement the essential elements of Australian Work Health and Safety Legislation. In particular, we look at the so-called ‘upstream’ WHS duties. These are the elements you need to safely introduce systems and services into the Australian market. Lessons in This Course A Guide … Read more

The 2022 Digest: Quarter Two

In this lesson, I will teach you how to demonstrate SFARP. To use the proper terminology, from the Australian WHS Act, how to eliminate or minimize risks so far as is reasonably practicable. (The Act never uses the acronym SFARP or SFAIRP, but everyone else does.) This will build upon the post So Far As … Read more

Career change: in my lecture to the System Engineering Industry Program at the University of Adelaide, I reflect on my career changes. What can you learn from my experiences? (Hint: a lot, I hope!) I want to talk about career changes because all of you – everyone listening – have already started to make them. … Read more

In this post on Safety Management Policy, we’re going to look at the policy requirements of a typical project management safety standard. This is the Acquisition Safety & Environmental System (ASEMS). The Ministry of Defence is the biggest acquirer of manufactured goods in the UK, and it uses ASEMS to guide hundreds of acquisition projects. … Read more

Good work design can help us achieve safe outcomes by designing safety into work processes and the design of products. Adding safety as an afterthought is almost always less effective and costs more over the lifecycle of the process or product. Introduction The Australian Work Health and Safety Strategy 2012-2022 is underpinned by the principle … Read more

Safety Planning: if you fail to plan, you are planning to fail. In my experience, good safety plans don’t always result in successful safety programs; however, bad safety plans never lead to success. Safety Planning: Introduction Definitions A Safety Management Plan is defined as: “A document that defines the strategy for addressing safety and documents the Safety Management … Read more

Our Second Safety Management Procedure is the Project Safety Committee. Okay, so committees are not the sexiest subject, but we need to get stakeholders together to make things happen! Project Safety Committee: Introduction Definitions A Safety Committee is defined as: A group of stakeholders that exercises, oversees, reviews and endorses safety management and safety engineering activities. Def … Read more

In ‘Project Safety Initiation’ we look at what you need to do to get your safety project or program started. Introduction Definitions A stakeholder is anyone who will be affected by the introduction of the system and who needs to be consulted or informed about the development and fielding of the system, and anyone who contributes to … Read more

The 2022 Digest: Quarter One

‘So Far As Is Reasonably Practicable’ is a phrase that gets used a lot, but what does it mean? How do you demonstrate it? Well, in Australia we do it like this … and you can learn from this wherever you operate! Attribution This post uses text from ‘How to Determine what is Reasonably Practicable … Read more

In Safety Assessment Techniques Overview we will look at how different analysis techniques can be woven together. How does one analysis feed into another? What do we need to get sufficient coverage to be confident that we’ve done enough? Learning Objectives: Safety Assessment Techniques Overview You will be able to: List and ‘sequence’ the five … Read more

TL;DR This article on Failure Mode Effects Analysis explains this powerful and commonly-used family of techniques. It covers: A description of the technique, including its purpose; When it might be used; Advantages, disadvantages and limitations; Sources of additional information; A simple example of an FMEA/FMECA; and Additional comments. I’ve added some ‘top tips’ of my … Read more

I’m pleased to tell you that The Safety Artisan is on Thinkific! Thinkific is a powerful and beautifully-presented online Learning Management System. This will complement the existing Safety Artisan website. My first course will be ‘System Safety Assessment’ with ten hours of instructional videos. The new course is here. (Please note that this is the same … Read more

What is System Safety Engineering? System Safety Engineering does five things: Deals with the whole system, including software, data, people, and environment; Uses a systematic (rigorous) process; Concentrates on requirements (to cope with complexity); Considers safety early in the system life cycle; and Handles complexity cost-effectively and efficiently. System Safety Engineering: Transcript What is system … Read more

In this article, I look at The Risk Matrix, a widely used technique in many industries. Risk Matrices have many applications! In this article, I have used material from a UK Ministry of Defence guide, reproduced under the terms of the UK’s Open Government Licence. Introduction A risk matrix is a graphical representation of the … Read more

You heard me right. Risk: Averse, Adverse, or Appetite? Which would you choose? Do we even have a choice? Read on … We often hear that we live in a risk-averse society.  By that, I mean that we don’t want to take risks, or that we’re too timid.  I don’t think that’s the whole story. … Read more

Thanks for Your Support in 2022!

Creating The 2022 Digest has reminded me just how much content I have produced this year. If you would like to get content emailed to you every two weeks, plus big discounts on courses, then subscribe here!


How to do Preliminary Hazard Analysis

In this 45-minute session, The Safety Artisan looks at how to do Preliminary Hazard Analysis, or PHA, which is Task 202 in Mil-Std-882E. We explore Task 202’s aim, description, scope, and contracting requirements. We also provide value-adding commentary and explain the issues with PHA – how to do it well and avoid the pitfalls.

This is the seven-minute-long demo video. The full video is 45 minutes long.

Topics: How to do Preliminary Hazard Analysis

  • Task 202 Purpose;
  • Task Description;
  • Recording & Scope;
  • Risk Assessment (Tables I, II & III);
  • Risk Mitigation (order of preference);
  • Contracting; and
  • Commentary.

Transcript: How to do Preliminary Hazard Analysis

Hello and welcome to the Safety Artisan, where you’ll find professional, pragmatic and impartial safety training resources. So, we’ll get straight on to our session and it is the 8th February 2020. 

Preliminary Hazard Analysis

Now we’re going to talk today about Preliminary Hazard Analysis (PHA). This is Task 202 in Military Standard 882E, which is a system safety engineering standard. It’s very widely used, mostly on military equipment, but it does turn up elsewhere. This standard is of wide interest, and Task 202 is the second of the analysis tasks. It’s one of the first things that you will do on a system safety program and therefore one of the most informative. This session forms part of a series of lessons that I’m doing on Mil-Std-882E.

Topics for This Session

What are we going to cover in this session? Quite a lot! The purpose of the task; a task description; recording and scope; and how we do risk assessments against Tables 1, 2 and 3 – that is, severity, likelihood and the overall risk matrix. We will talk about all three, about risk mitigation using the order of preference, a little bit about contracting, and then a short commentary from myself. In fact, I’m providing commentary all the way through. So, let’s crack on.
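As an aside, the severity/likelihood lookup can be pictured as a simple two-key table. The sketch below is illustrative only – the category and level names follow the style of Mil-Std-882E Tables I and II, but the risk-level values in the matrix are my own illustration, so always take the authoritative mappings from Table III of the standard itself:

```python
# Illustrative sketch of a Mil-Std-882E-style risk assessment lookup.
# Severity categories and probability levels follow the standard's naming;
# the matrix values below are illustrative - consult Table III of the
# standard for the authoritative risk-level assignments.

SEVERITY = {"Catastrophic": 1, "Critical": 2, "Marginal": 3, "Negligible": 4}
PROBABILITY = {"Frequent": "A", "Probable": "B", "Occasional": "C",
               "Remote": "D", "Improbable": "E"}

# (severity, probability) -> risk level (illustrative values only)
RISK_MATRIX = {
    (1, "A"): "High",    (1, "B"): "High",    (1, "C"): "High",
    (1, "D"): "Serious", (1, "E"): "Medium",
    (2, "A"): "High",    (2, "B"): "High",    (2, "C"): "Serious",
    (2, "D"): "Medium",  (2, "E"): "Medium",
    (3, "A"): "Serious", (3, "B"): "Serious", (3, "C"): "Medium",
    (3, "D"): "Medium",  (3, "E"): "Medium",
    (4, "A"): "Medium",  (4, "B"): "Medium",  (4, "C"): "Medium",
    (4, "D"): "Low",     (4, "E"): "Low",
}

def assess_risk(severity: str, probability: str) -> str:
    """Combine a hazard's severity and probability into a risk level."""
    return RISK_MATRIX[(SEVERITY[severity], PROBABILITY[probability])]

print(assess_risk("Catastrophic", "Frequent"))  # High
print(assess_risk("Marginal", "Remote"))        # Medium
```

The point of the three-table structure is that severity and likelihood are assessed separately, and the matrix then forces a consistent, pre-agreed combination of the two.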

Task 202 Purpose

The purpose of Task 202, as it says, is to perform and document a preliminary hazard analysis, or PHA for short, to identify hazards, assess the initial risks and identify potential mitigation measures. We’re going to talk about all of that.

Task Description

First, the task description is quite long here. And as you can see, I’ve highlighted some stuff that I particularly want to talk about.

It says “the contractor” [does this or that], but it doesn’t really matter who is doing the analysis, and actually, the customer needs to do some to inform themselves, otherwise they won’t really understand what they’re doing.  Whoever does it needs to perform and document PHA. It’s about determining initial risk assessments. There’s going to be more work, more detailed work done later. But for now, we’re doing an initial risk assessment of identified hazards. And those hazards will be associated with the design or the functions that we’re proposing to introduce. That’s very important. We don’t need a design to do this. We can get in early when we have user requirements, functional requirements, that kind of thing.

Doing this work will help us make better requirements for the system. So, we need to evaluate those hazards for severity and probability, it says, based on the best available data. And of course, early in a program, that’s another big issue. We’ll talk about that more later. It says including mishap data as well, if accessible. ‘Mishap’ is the American term; it means accident, but it avoids any suggestion about whether the event is accidental or deliberate. It might be stupidity, deliberate, whatever. It’s a mishap – an undesirable event. We look for accessible data from similar systems, legacy systems and other lessons learned. I talked about that a little bit in the Task 201 lesson, and there’s more on that today under commentary. We need to look at provisions and alternatives – meaning design provisions and design alternatives – in order to reduce risks, and at adding mitigation measures to eliminate hazards, if we can, or reduce the associated risk. We need to include all of that. That’s a good overview of the task and what we need to talk about.

Recording & Scope

First, recording and scope. As always with these tasks, we’ve got to document the results of the PHA in a hazard tracking system. Now, a word on terminology: we might call it a hazard tracking system, a hazard log, or a risk register. It doesn’t really matter what it’s called. The key point is that it’s a tracking system. It’s a live document, as people say – a spreadsheet or a database, something like that. It’s something relatively easy to update and change, and we can track changes through the safety program as we do more analysis, because things will change. We should expect to get some results and to refine and change them as time goes on. Very important point.
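To make the ‘live document’ idea concrete, here is a minimal sketch of what one record in a hazard tracking system might look like. The field names are my own illustration – real hazard logs vary – but the essentials are the same: a unique identifier, an initial risk assessment, mitigations, and a change history:

```python
from dataclasses import dataclass, field

@dataclass
class HazardLogEntry:
    """One record in a hazard tracking system (field names illustrative)."""
    hazard_id: str
    description: str
    causes: list = field(default_factory=list)
    initial_severity: str = "TBD"      # assessed at PHA, refined later
    initial_probability: str = "TBD"
    mitigations: list = field(default_factory=list)
    status: str = "Open"               # e.g. Open / Mitigated / Closed
    history: list = field(default_factory=list)

    def update(self, note: str) -> None:
        # Record every change, so the log stays a traceable, live document
        self.history.append(note)

# A hypothetical entry, created early in the program from a PHA result
entry = HazardLogEntry("HAZ-001", "Loss of braking function on landing")
entry.update("2020-02-08: initial risk assessed at PHA")
print(entry.hazard_id, entry.status)  # HAZ-001 Open
```

Whether it lives in a spreadsheet or a database, the structure matters less than the discipline of keeping each entry current as the analysis matures.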

That’s it for the Demo…

End: How to do Preliminary Hazard Analysis

You can find a free pdf of the System Safety Engineering Standard, Mil-Std-882E, here.


SW Safety Principles Conclusions and References

SW Safety Principles Conclusions and References is the sixth and final blog post on Principles of Software Safety Assurance. In them, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Conclusion

These six blog posts have presented the 4+1 model of foundational principles of software safety assurance. The principles strongly connect to elements of current software safety assurance standards and they act as a common benchmark against which standards can be measured.

Through the examples provided, it’s also clear that, although these concepts can be stated clearly, they haven’t always been put into practice. There may still be difficulties with their application by current standards. Particularly, there is still a great deal of research and discussion going on about the management of confidence with respect to software safety assurance (Principle 4+1).

[My own, informal, observations agree with this last point. Some standards apply Principle 4+1 more rigorously but, as a result, they are more expensive – and therefore less popular and less used.]

Standards and References

[1] RTCA/EUROCAE, Software Considerations in Airborne Systems and Equipment Certification, DO-178C/ED-12C, 2011.

[2] CENELEC, EN-50128:2011 – Railway applications – Communication, signaling and processing systems – Software for railway control and protection systems, 2011.

[3] ISO-26262 Road vehicles – Functional safety, FDIS, International Organization for Standardization (ISO), 2011

[4] IEC-61508 – Functional Safety of Electrical / Electronic / Programmable Electronic Safety-Related Systems. International Electrotechnical Commission (IEC), 1998

[5] FDA, Examples of Reported Infusion Pump Problems, Accessed on 27 September 2012, http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/GeneralHospitalDevicesandSupplies/InfusionPumps/ucm202496.htm

[6] FDA, FDA Issues Statement on Baxter’s Recall of Colleague Infusion Pumps, Accessed on 27 September 2012, http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm210664.htm

[7] FDA, Total Product Life Cycle: Infusion Pump – Premarket Notification 510(k) Submissions, Draft Guidance, April 23, 2010.

[8] “Report on the Accident to Airbus A320-211 Aircraft in Warsaw on 14 September 1993”, Main Commission Aircraft Accident Investigation Warsaw, March 1994, Accessed on 1st October 2012, http://www.rvs.uni-bielefeld.de/publications/Incidents/DOCS/ComAndRep/Warsaw/warsaw-report.html

[9] JPL Special Review Board, “Report on the Loss of the Mars Polar Lander and Deep Space 2 Missions”, Jet Propulsion Laboratory, March 2000.

[10] Australian Transport Safety Bureau, In-Flight Upset Event 240 km North-West of Perth, WA, Boeing Company 777-200, 9M-MRG. Aviation Occurrence Report 200503722, 2007.

[11] H. Wolpe, General Accounting Office Report on Patriot Missile Software Problem, February 4, 1992, Accessed on 1st October 2012, Available at: http://www.fas.org/spp/starwars/gao/im92026.htm

[12] Y.C. Yeh, Triple-Triple Redundant 777 Primary Flight Computer, IEEE Aerospace Applications Conference, pp. 293-307, 1996.

[13] D.M. Hunns and N. Wainwright, Software-based protection for Sizewell B: the regulator’s perspective. Nuclear Engineering International, September 1991.

[14] R.D. Hawkins, T.P. Kelly, A Framework for Determining the Sufficiency of Software Safety Assurance. IET System Safety Conference, 2012.

[15] SAE. ARP 4754 – Guidelines for Development of Civil Aircraft and Systems. 1996.

Software Safety Principles: End of the Series

This blog post series was derived from ‘The Principles of Software Safety Assurance’, by RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Software Safety Assurance and Standards

This post, Software Safety Assurance and Standards, is the fifth in a series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Relationship to Existing Software Safety Standards

The principles of software safety assurance discussed in this article are typically present in most software safety standards, though rarely made explicit. However, by concentrating only on adherence to the letter of these standards, software developers are likely to lose sight of the primary goals (e.g. through box-ticking). We look at manifestations of each of the principles in some of the most popular software safety standards below: IEC 61508, ISO 26262, and DO-178C.

Principle 1

IEC 61508 and ISO 26262 both demonstrate how hazard analysis at the system level and software safety criteria have been linked. High-level requirements that address system requirements assigned to software to prevent system risks must be defined, according to DO-178C. Particularly when used in conjunction with companion standard ARP 4754, this addresses Principle 1.

[In military aviation, I’m used to seeing Do-178 used in conjunction with Mil-Std-882. This also links hazard analysis to software safety requirements, although perhaps not as thoroughly as ARP 4754.]

Principle 2

Traceability of software requirements is always required. The standards also place a strong emphasis on the iterative validation of the software requirements.

Specific examples of requirements decomposition models are provided by DO-178C and ISO 26262. Capturing the justification for the required traceability is an area where standards frequently fall short (a crucial aspect of Principle 2).

What is particularly lacking is a focus on upholding the intent of the software safety requirements. This needs richer forms of traceability – forms that take the requirements’ intent into account, rather than purely syntactic links between the various phases of development.
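The traceability point above can be sketched as a tiny data model, in which each derived requirement records both its upward trace and the rationale for that link. All names and fields here are my own illustration, not taken from any standard:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Requirement:
    req_id: str
    text: str
    derived: bool = False            # True if decomposed from a parent requirement
    parent_id: Optional[str] = None  # upward trace to the source requirement
    rationale: str = ""              # why this decomposition satisfies the parent

def untraced(reqs: List[Requirement]) -> List[str]:
    """Flag derived requirements lacking an upward trace or a recorded rationale."""
    return [r.req_id for r in reqs
            if r.derived and (r.parent_id is None or not r.rationale)]

# A hypothetical decomposition from a system requirement to software ones
reqs = [
    Requirement("SYS-1", "The system shall limit brake demand to 0.5 g."),
    Requirement("SW-1", "The brake controller shall cap demand at 0.5 g.",
                derived=True, parent_id="SYS-1",
                rationale="Direct allocation of SYS-1 to the brake controller"),
    Requirement("SW-2", "Brake demand inputs shall be range-checked.",
                derived=True),       # no trace or rationale yet - flagged below
]
print(untraced(reqs))  # ['SW-2']
```

The `rationale` field is the part most tools and standards omit: a purely syntactic parent-child link records that a trace exists, but not why the decomposition preserves the parent requirement’s intent.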

Principle 3

Guidance on requirement satisfaction forms the basis of the software safety standards. Although there are distinct disparities in the advised methods of satisfaction, this principle is generally thoroughly addressed (for example, DO-178 traditionally placed a strong emphasis on testing).

[Def Stan 00-55 places more emphasis on proof, not just testing. However, this onerous software safety standard has fallen out of fashion.]

Principle 4

This requires that the absence of errors introduced during the software lifecycle be demonstrated. Aspects of this principle can be seen in the standards. However, software hazard analysis is the part that receives the least attention across all of the standards.

[N.B. The combination of Mil-Std-882E and the Joint Software Systems Safety Engineering Handbook places a lot of emphasis on this aspect.]

The standards imply a process in which system-level safety analysis produces the requirements, including safety requirements assigned to software, and the purpose of software development is to show that those requirements are correctly met. At later phases of the development process, these requirements are refined and put into practice without explicitly applying software hazard analysis.

There is no specific requirement in DO-178C to identify ‘emergent’ safety risks during software development, but it does permit recognized safety issues to be fed back to the system level.

Principle 4+1

All standards share the idea of modifying the software assurance strategy in accordance with ‘risk’. However, there are significant differences in how the software’s criticality is assessed. IEC 61508 establishes a Safety Integrity Level based on the required risk reduction, DO-178B emphasizes severity, and ISO 26262 adds the idea of the vehicle’s controllability. The suggested strategies and processes also vary greatly at different levels of criticality.

[The Mil-Std-882E approach is to set a ‘level of rigor’ for software development. This uses a combination of mishap severity and the reliance placed on the software to set the level.]
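To illustrate that bracketed comment, the severity-plus-reliance idea can be sketched as a lookup: the more the system relies on the software (its ‘control category’) and the worse the possible mishap, the more rigor is demanded. The numbers below are illustrative – consult the standard’s software criticality matrix for the authoritative assignments:

```python
# Sketch of the Mil-Std-882E 'level of rigor' idea: software criticality is
# set from mishap severity plus the software's control authority (its
# 'Software Control Category', SCC). The values below are illustrative -
# see the standard's software criticality matrix for authoritative ones.

SEVERITIES = ["Catastrophic", "Critical", "Marginal", "Negligible"]

# Rows: SCC 1 (most control) to SCC 5 (no safety impact).
# Entries: software criticality index, where 1 demands the most rigor.
SOFTWARE_CRITICALITY = [
    [1, 1, 3, 4],  # SCC 1: autonomous control
    [1, 2, 3, 4],  # SCC 2: semi-autonomous control
    [2, 3, 4, 4],  # SCC 3: redundant fault tolerant
    [3, 4, 4, 4],  # SCC 4: influential
    [5, 5, 5, 5],  # SCC 5: no safety impact
]

def software_criticality_index(severity: str, control_category: int) -> int:
    """Look up the criticality index driving the level of rigor."""
    return SOFTWARE_CRITICALITY[control_category - 1][SEVERITIES.index(severity)]

# Same severity, but less reliance on the software -> less rigor demanded
print(software_criticality_index("Catastrophic", 1))  # 1
print(software_criticality_index("Catastrophic", 4))  # 3
```

Note that, unlike a probability-based risk matrix, this lookup deliberately avoids estimating software failure probability: reliance on the software stands in for likelihood.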

Software Safety Assurance and Standards: End of Part 5 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Software Safety Assurance

Software Safety Assurance is the fourth in a new series of six blog posts on Principles of Software Safety Assurance. In them, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Software Assurance = Justified Confidence

[The original authors referred to Principle 4+1 as ‘confidence’, but this term is not well recognized, so I have used ‘assurance’. The two terms are related. Both terms get us to ask: how much safety is enough? This is also the topic addressed in my blog post on Proportionality.]

Principle 4+1:

The confidence established in addressing the software safety principles shall be commensurate to the contribution of the software to system risk.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

All safety-related software systems must adhere to the four aforementioned principles. To prove that each of the guiding principles has been established for the software, evidence must be presented.

Depending on the characteristics of the software system itself, the dangers that are present, and the principle that is being demonstrated, the evidence may take many different forms. The strength and quantity of the supporting evidence will determine how confidently, or assuredly, the principle is established.

Therefore, it’s crucial to confirm that the level of trust developed is always acceptable. This is frequently accomplished by making sure that the level of confidence attained corresponds to the contribution the software makes to system risk. This strategy makes sure that the areas that lower safety risk the most receive the majority of attention (when producing evidence).

This method is extensively used today. Many standards employ concepts like integrity or assurance levels to describe the amount of confidence needed in a certain software function.

Examples

The flight control system for the Boeing 777 airplane is a Fly-By-Wire (FBW) system … The Primary Flight Computer (PFC) is the central computation element of the FBW system. The triple modular redundancy (TMR) concept also applies to the PFC architectural design. Further, the N-version dissimilarity issue is integrated to the TMR concept.

Details are given of a ‘special case procedure’ within the principles’ framework which has been developed specifically to handle the particular problem of the assessment of software-based protection systems. The application of this ‘procedure’ to the Sizewell B Nuclear Power Station computer-based primary protection system is explained.

Suitability of Evidence

Once the necessary level of confidence has been determined, it is crucial to be able to judge whether it has been achieved. Several factors must be taken into account when determining the degree of confidence with which each principle is put into practice.

The suitability of the evidence should be taken into consideration first. The constraints of the type of evidence being used must be considered too. These restrictions will have an impact on the degree of confidence that can be placed in each sort of evidence with regard to a certain principle.

Examples of these restrictions include the degree of test coverage that can be achieved, the precision of the models employed in formal analysis approaches, or the subjectivity of review and inspection. Most techniques have limits on what they can achieve.

Due to these limitations, it may be necessary to combine diverse types of evidence to reach the required degree of confidence in any one of the principles. The trustworthiness of each piece of evidence must also be taken into account – that is, the degree of confidence in the evidence item’s capacity to perform as expected.

This is also frequently referred to as evidence rigor or evidence integrity. The rigor of the technique employed to produce an evidence item determines its trustworthiness. The primary variables that affect trustworthiness are tools, personnel, methodology, level of audit and review, and independence.

The four software safety principles will never change. However, there is a wide range in the confidence with which those principles can be demonstrated. We now know that a judgment must be made about the degree of assurance required for the principles to be established for any given system. We now have our guiding principle.

Since it affects how the previous four principles are put into practice, this concept is also known as Principle 4+1.

Software Safety Assurance: End of Part 4 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Software Safety Principle 4

Software Safety Principle 4 is the third in a new series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Principle 4: Hazardous Software Behaviour

The fourth software safety principle is:

Principle 4: Hazardous behaviour of the software shall be identified and mitigated.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

Software safety requirements imposed on a software design can capture the high-level safety requirements’ intent. However, this does not ensure that all of the software’s potentially dangerous behaviors have been considered. Because of how the software has been created and built, there will frequently be unanticipated behaviors that cannot be understood through a straightforward requirements decomposition. These risky software behaviors could be caused by one of the following:

  1. Unintended interactions and behaviors brought on by software design choices; or
  2. Systematic mistakes made when developing software.

On 1 August 2005, a Boeing Company 777-200 aircraft, registered 9M-MRG, was being operated on a scheduled international passenger service from Perth to Kuala Lumpur, Malaysia. The crew experienced several frightening and contradictory cockpit indications.

This incident illustrates the issues that can result from unintended consequences of software design. Such incidents can only be foreseen through a methodical and detailed analysis of potential software failure mechanisms and their repercussions (both on the software and on external systems). Once potentially harmful software behavior has been identified, safeguards can be put in place to address it. However, doing so requires us to examine the potential impact of software design decisions.

Not all dangerous software behavior will develop as a result of unintended consequences of the software design. As a direct result of flaws made during the software design and implementation phases, dangerous behavior may also be seen. Seemingly minor development mistakes can have serious repercussions.

It’s important to stress that this is not a problem with software quality in general. We exclusively focus on faults that potentially result in dangerous behavior for the purposes of software safety assurance. As a result, efforts can be concentrated on lowering systematic errors in areas where they might have an impact on safety.

Since systematically establishing direct hazard causality for every error may not be possible in practice, it may be preferable, for now, to accept what is regarded as best practice. However, the justification for doing so ought, at the very least, to be founded on the software safety community's knowledge of how the particular type of problem under consideration has led to safety-related accidents.

To guarantee that adequate rigor is applied to their development, it is also crucial to identify the most critical components of the software design. Any potentially hazardous software behavior must be identified and prevented if we are to be confident that the software will always behave safely.

Software Safety Principle 4: End of Part 3 (of 6)


Software Safety Principles 2 and 3

Software Safety Principles 2 and 3 is the second in a new series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Principle 2: Requirement Decomposition

The second software safety principle is:

Principle 2: The intent of the software safety requirements shall be maintained throughout requirements decomposition.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

The requirements and design are progressively broken down as the software development lifecycle moves forward, leading to an increasingly detailed software design. The requirements produced for this more detailed design are referred to as "derived software requirements". Once the software safety requirements have been established as comprehensive and correct at the highest (most abstract) level of design, their intent must be upheld as they are decomposed.

An example of the failure of requirements decomposition is the crash of Lufthansa Flight 2904 at Warsaw on 14 September 1993.

In essence, the issue is one of ongoing requirements validation. How do we show that the requirements expressed at one level of design abstraction are equal to those defined at a more abstract level? This difficulty arises constantly during the software development process.

It is insufficient to only consider requirements fulfillment. The software safety requirements had been met in the Flight 2904 example. However, they did not match the intent of the high-level safety requirements in the real world.

Human factors issues are another consideration that can make the validity of the decomposition harder to demonstrate (a warning may be presented to a pilot as required, but may not be noticed on busy cockpit displays).

One theoretical solution to this issue is to include all necessary detail in the initial high-level requirements. In practice, however, this is rarely achievable. Design choices that give rise to more specific requirements will inevitably be made later in the software development lifecycle, and that detail cannot be known accurately until each design choice has been made.

The decomposition of safety requirements must always be addressed if the software is to be regarded as safe to use.
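The intent-preservation problem cannot be solved by tooling alone, but simple traceability checks can at least expose high-level safety requirements that no derived requirement claims to address. The sketch below is illustrative only: the function name and requirement IDs are hypothetical, and traceability is necessary but not sufficient to show that intent has been preserved (that still needs validation and review).

```python
# Illustrative sketch: a minimal traceability check between high-level
# safety requirements and derived requirements. All names are hypothetical.
from typing import Dict, List


def uncovered_parents(parents: List[str],
                      derived: Dict[str, List[str]]) -> List[str]:
    """Return high-level requirement IDs that no derived requirement
    traces to. A gap here means a decomposition step has been missed;
    an empty result does NOT prove the intent has been preserved."""
    covered = {p for traces in derived.values() for p in traces}
    return [p for p in parents if p not in covered]


# Example: derived requirement D1 traces to SR-1; SR-2 is left uncovered.
parents = ["SR-1", "SR-2"]
derived = {"D1": ["SR-1"]}
assert uncovered_parents(parents, derived) == ["SR-2"]
```

A check like this is cheap to run at every level of design abstraction, which is why requirements-management tools make trace links a first-class concept.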

Requirements Satisfaction

The third software safety assurance principle is:

Principle 3: Software safety requirements shall be satisfied.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

It must be confirmed that a set of "valid" software safety requirements has been met once they have been defined. This set may comprise allocated software safety requirements (Principle 1), or refined or derived software safety requirements (Principle 2). A prerequisite for demonstrating their satisfaction is that these requirements are precise, well-defined, and genuinely verifiable.

The types of verification techniques used to show that the software safety requirements have been met will vary with the degree of safety criticality, the stage of development, and the technology being employed. Therefore, attempting to prescribe specific verification methodologies for producing verification evidence is neither practical nor wise.

Mars Polar Lander was an ambitious mission to set a spacecraft down near the edge of Mars’ south polar cap and dig for water ice. The mission was lost on arrival on December 3, 1999.

Given the complexity and safety-critical nature of many software-based systems, it is obvious that using just one type of software verification is insufficient. As a result, a combination of verification techniques is frequently required to produce the verification evidence. Testing and expert review are frequently used to produce primary or secondary verification evidence. However, formal verification is increasingly emphasized because it can demonstrate satisfaction of the software safety requirements with greater confidence.

The main obstacle to demonstrating that the software safety requirements have been met is the inherent limitations of the evidence produced by the techniques described above. These difficulties are rooted in the characteristics of the problem space.

Given the complexity of software systems, especially those used to achieve autonomous capabilities, there are challenges with completeness for both testing and analysis methodologies. The underlying logic of the software can be verified using formal methods, but there are still significant drawbacks. Namely, it is difficult to provide assurance of model validity. Also, formal methods do not deal with the crucial problem of hardware integration.

Clearly, the capacity to meet the stated software safety requirements is a prerequisite for ensuring the safety of software systems.

Software Safety Principles 2 & 3: End of Part 2 (of 6)


Principles of Software Safety Assurance

This is the first in a new series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

In this first of six blog posts, we introduce the subject and the First Principle.

Introduction

Software assurance standards have increased in number along with the use of software in safety-critical applications. There are now several software standards, including the cross-domain 'functional safety' standard IEC 61508, the avionics standard DO-178B/C, the railway standard CENELEC EN 50128, and the automotive standard ISO 26262. (The last two are derivatives of IEC 61508.)

Unfortunately, there are significant discrepancies in vocabulary, concepts, requirements, and suggestions among these standards. It could seem like there is no way out of this.

However, the common software safety assurance principles that can be observed from both these standards and best practices are few (and manageable). These concepts are presented here together with their justification and an explanation of how they relate to current standards.

These ideas serve as the unchanging foundation of any software safety argument since they hold true across projects and domains. Of course, accepting these principles does not exempt one from adhering to domain-specific norms. However, they:

  • Provide a reference model for cross-sector certification; and
  • Aid in maintaining comprehension of the “big picture” of software safety issues while analysing and negotiating the specifics of individual standards.

Software Safety Principles

Principle 1: Requirements Validity

The first software safety assurance principle is:

Principle 1: Software safety requirements shall be defined to address the software contribution to system hazards.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

The evaluation and reduction of risks is crucial to the design of safety-critical systems. When specific environmental factors come into play, system-level hazards such as unintended braking release in cars or the absence of stall warnings in aircraft can result in accidents. Although software is intangible, it can implement system control or monitoring functions that contribute to these risks (e.g. software implementing antilock braking or aircraft warning functions).

Typically, the system safety assessment process uses safety analysis methodologies like Fault Tree Analysis or Hazard and Operability (HAZOP) Studies to pinpoint how software, along with other components like sensors, actuators, or power sources, can contribute to risks.  The results of these methodologies ought to influence the formulation of safety requirements and their distribution among software components.

It is crucial to remember that software is often treated as a black box: it is used to implement specific functions, with limited visibility of how those functions are realised. The risk from some system hazards can rise to unacceptable levels if hazardous software failures are not identified and suitable safety requirements are not defined and applied.

Examples of software requirements not being adequately defined – and the effects thereof – were reported by the US Food and Drug Administration (FDA).

Simply put, software is a fundamental enabling technology employed in safety-critical systems. Assessing the ways in which software might increase system risks should be a crucial component of the overall system safety process. Hazardous software contributions discovered by that safety process are then addressed by defining safety requirements to mitigate them.

It is critical that these contributions are described in a clear and testable way, namely by identifying the exact types of software failures that can result in risks. If not, we run the risk of creating generic software safety requirements—or even just correctness requirements—that don’t take into account the specific hazardous failure modes that have an impact on the system’s safety.

Principles of Software Safety Assurance: End of Part 1 (of 6)


Proportionality

Proportionality is about committing resources to the Safety Program that are adequate – in both quality and quantity – for the required tasks.

Introduction to Proportionality

Proportionality is a concept that should be applied to determine the allocation of resource and effort to a safety and environmental argument based on its risk.  It is a difficult concept to attempt to distil into a process as each Product, System or Service will have different risks, objectives, priorities and interfaces that make a ‘one size fits all’ approach impossible.

This section describes an approach that may be used to assist in applying the concept of proportionality. It seeks to guide you in understanding where a proportionate amount of effort can be directed, while maintaining the overriding principle that Risk to Life must be managed. Regulators require that a proportional approach is used, and there are many methods that try to achieve this. Some focus on the amount of evidence needed to justify a safety argument; some place more emphasis on the activities required to make a safety argument; and some consider that fulfilling certain criteria can lead to an assessment of risk. One requirement at the centre of any proportional approach, however, is that safety risks are acceptable.

A fundamental consideration of a proportional approach is considering compliance against assessment criteria.  The Health and Safety Executive’s view is that there should be some proportionality between the magnitude of the risk and the measures taken to control the risk. The phrase “all measures necessary” should be interpreted with this principle in mind. Both the likelihood of accidents occurring and the severity of the worst possible accident determine proportionality.  Application of proportionality should highlight the hazardous activities for which the Duty Holder should provide the most detailed arguments to support the demonstration [that risk is acceptable].

The following considerations may affect proportionality, in a defence context:

  1. Type of consequence;
  2. Severity;
  3. The stage in the Life cycle;
  4. Intended use (CON OPS/Design Intent);
  5. Material state (degradation);
  6. Historical performance;
  7. Cost of safety;
  8. Cost of realising risk;
  9. Public Relations;
  10. Persons at Risk:
    • 1st, 2nd, 3rd Party;
    • Military;
    • Civilian;
    • Civil Servants;
    • Contractors;
    • General public;
    • VIPs;
    • Youths;
  11. Volume;
  12. Geographical spread/transboundary.

Some important points that should be noted regarding the safety and environmental proportionality approach are that:

  1. Proportionality is inherent to safety and environmental risk assessment (i.e. use of ALARP, BPEO, etc.);
  2. Proportionality is explicitly linked to risk;
  3. Multiple factors need to be considered when deciding a proportional approach;
  4. ASEMS is the mandated safety and environmental framework; therefore, the framework should be applied; it is not possible to develop a proportional approach that negates any part of ASEMS.

Waterfall Approach Process

The model that should be used to consider a proportional approach is intended to provide guidance and should only be used by competent safety and environmental practitioners.  A degree of judgement should be used when answering questions, particularly where a Product, System or Service may easily be classified in more than one category; this is why the use of competent safety and environmental practitioners is required.

The waterfall approach model categorises Product, System or Service risk in accordance with factual questions, presented on the left of the diagram below, which are asked about the intended function and operation.  Each question should be used to define the cumulative potential risk which may be presented by the Product, System or Service.  The Product, System or Service is categorised into one of three risk bands, which align to those defined in the Tolerability triangle, presented on the right of the diagram.

During the process two initial questions are asked, where an answer of “yes” will automatically result in a categorisation of high risk, regardless of the answer to subsequent questions.  Further refinement is required for lower risk systems to ensure that the system risk is categorised appropriately.

Figure 1, Proportionality Waterfall Approach Model

The diagram above depicts the proportionality waterfall approach model used for the application of ASEMS.

Adherence to ASEMS is mandatory for DE&S.  As such, it is not possible to develop a proportional approach that negates any individual part of ASEMS and so the procedures described in ASEMS Part 2 – Instructions, Procedures and Support should be followed;  where proportionality may be applied is within each General Management Procedure, Safety Management Procedure or Environmental Management Procedure for the allocation of resource, time or effort.

Once the risk category has been established guidance is defined which prescribes the rigour which should be applied to the safety assessment process in terms of Process, Effort, Competence, Output, Assurance (PECOA):

  1. Process – the amount of dedicated/specific process applied, and the level in the organisational structure at which the Safety and Environmental Management System is established;
  2. Effort – how much time is afforded to the management of risk;
  3. Competence – the level of competence required to conduct appropriate assessment and management of safety and environmental risks;
  4. Output – the detail of evidence and reporting, which should be commensurate with the level of risk;
  5. Assurance – the level of assurance which shall be applied to the process.

Guidance for the application of PECOA is provided in the table below.  It should be noted that this is indicative guidance for illustrative purposes only. It is a fundamental requirement of ASEMS safety management principles that all safety decisions made should be reviewed, assessed and endorsed by a Safety and Environmental Management Committee to ensure that the Products, Systems and Services categorisation is correct. The diagram below shows the process that may be applied:

Proportionality Process

It should be remembered that using this low/medium/high categorisation could be misleading, as the model takes no account of the population exposed or the rate of occurrence of the harm. A simple system that can only cause minor injury could still present a high degree of risk if many people are exposed to the risk and the accident rate is high.  Moreover, acceptance of such a situation could lead to the development of an ineffective safety culture, or the bypassing of safety mitigation procedures in order to avoid a high-accident/minor-injury position.  This is where competent safety and environmental advice is essential to ensure that any proportionality model is not slavishly followed at the expense of proper rigour.  Where this model is useful is in assisting safety and environmental professionals to perform a preliminary assessment of which Products, Systems or Services are a priority for the allocation of resource, time or effort.

Stage One – System type and Life Cycle Phase

The first question is used to indicate, at a high level, the likely degree of risk for a project.  It should be noted that this is not a definitive assessment and that Products, Systems or Services could move within the model as the safety or environmental evidence is assessed.  There will be a degree of pre-existing assessment which accompanies a Product, System or Service and this may be used to assist with this initial question. 

The safety and environmental assessment process should be closely aligned with the development process for newly developed Products, Systems or Services.  Where Products, Systems or Services are in the Concept, Assessment, Development or Manufacture phase of the CADMID/T cycle, they should be accompanied by a safety and environmental assessment process which utilises quantitative assessment techniques.

Where a Product, System or Service sits in the CADMID/T cycle should not influence the rigour of any safety or environmental argument; this model is provided to assist with any determination of the resource, time or effort that may be applied to the evidence to support the argument.  All Risk to Life should be ALARP, with no exception; what changes is the allocation of resources, time and effort to reach that judgement.

Those Products, Systems or Services where the expected worst credible consequence results in, at worst, a single minor injury should automatically be categorised as LOW risk and a qualitative approach may be adopted.

Commercial Off The Shelf or Military Off The Shelf systems should be accompanied by evidence which may be used in the safety and environmental assessment to demonstrate that they are acceptably safe and environmentally compliant, particularly where these are manufactured for use in the EU, where each Product, System or Service should demonstrate compliance with the applicable EU standards.  That the Product, System or Service is Commercial Off The Shelf or Military Off The Shelf is not, in itself, evidence.

Such evidence should include test evidence, trials evidence or a certificate of conformance.  Where a Commercial Off The Shelf or Military Off the Shelf system is already in the in-service phase and it is established that there is sufficient evidence to form a compelling safety argument that the Risk to Life is ALARP, then the system should be categorised as MEDIUM-LOW.  Where the system is also non-complex then it may be categorised as LOW.

Such Commercial Off The Shelf or Military Off the Shelf evidence should only be relied upon where it is established that this evidence is sufficient to demonstrate that the system is acceptably safe and environmentally compliant and already in existence.  The degree and appropriateness of evidence should be established by a Safety and Environmental Management Committee, with particular emphasis upon the quality of the evidence for high-risk systems.  This approach should be undertaken if the Product, System or Service in its entirety is categorised as Commercial Off The Shelf or Military Off the Shelf.  Where only sub-systems or components are Commercial Off The Shelf or Military Off the Shelf, the Product, System or Service should be categorised as bespoke and assessed accordingly.

Stage Two – Risk estimation and System Complexity

Any estimation of the risk that a Product, System or Service is likely to present should be used to further refine its categorisation.  If the worst credible consequence of a Product, System or Service is multiple fatalities then that Product, System or Service should automatically be categorised as HIGH risk.

If the worst credible consequence is a single fatality or multiple severe injuries then the system complexity should be considered to further refine and inform the categorisation.  Complex or novel system designs require a greater degree of input from Suitably Qualified and Experienced Personnel to conduct the safety and environmental assessment.  Accordingly, those Products, Systems or Services which are complex and novel should also be categorised as HIGH, whereas those exhibiting a lower degree of complexity might be categorised as MEDIUM.

Notwithstanding this, those Products, Systems or Services that are in the Concept, Assessment, Development or Manufacture/Termination phase of the CADMID/T cycle should still be supported by a quantitative safety and environmental process.  The only exceptions are those Products, Systems or Services where the worst credible consequence is a single minor injury.  These should be categorised as LOW risk and may be supported by a qualitative safety and/or environmental process.

Products, Systems or Services where the worst credible consequence is at worst a single minor injury should be categorised as LOW-MEDIUM risk where the design is complex or novel; those exhibiting a lower degree of complexity should be categorised as LOW risk.
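For illustration only, the Stage One and Stage Two decision rules might be sketched as follows. The function, category strings and argument names are my own shorthand, not ASEMS terms, and any real categorisation must be endorsed by a Safety and Environmental Management Committee.

```python
# Illustrative sketch of the waterfall categorisation rules described
# in Stages One and Two. Names and strings are hypothetical, not ASEMS terms.
def categorise(worst_credible_consequence: str,
               complex_or_novel: bool,
               cots_with_sufficient_evidence: bool = False) -> str:
    # Multiple fatalities always categorises as HIGH, regardless of
    # the answers to any subsequent questions.
    if worst_credible_consequence == "multiple fatalities":
        return "HIGH"

    # In-service COTS/MOTS with a compelling ALARP argument:
    # MEDIUM-LOW, or LOW if the system is also non-complex.
    if cots_with_sufficient_evidence:
        return "MEDIUM-LOW" if complex_or_novel else "LOW"

    # Single fatality or multiple severe injuries: complexity refines the band.
    if worst_credible_consequence in ("single fatality",
                                      "multiple severe injuries"):
        return "HIGH" if complex_or_novel else "MEDIUM"

    # At worst a single minor injury: LOW, or LOW-MEDIUM if complex or novel.
    if worst_credible_consequence == "single minor injury":
        return "LOW-MEDIUM" if complex_or_novel else "LOW"

    return "MEDIUM"  # default: refer the case to the Committee for judgement


# Example: a complex system whose worst credible outcome is a single fatality.
assert categorise("single fatality", complex_or_novel=True) == "HIGH"
```

A sketch like this only performs the preliminary triage; it cannot weigh population exposure or accident rate, which is exactly the caveat noted above.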

Once the risk category has been established the rigour which should be applied to the safety assessment process in terms of Process, Effort, Competence, Output, Assurance (PECOA) should be defined.  This is summarised below:

(The original guidance is presented as a table, mapping Program Scale – from small-scale systems with no Critical Function to Large Scale Capital, Critical Function or bespoke systems – across the CADMID/T lifecycle stages, onto the three assessment bands below.)

HIGH assessment:
  • Process – A rigorous quantitative safety and environmental assessment process should be applied.
  • Effort – Significant effort should be expended developing the safety and environmental case.
  • Competence – The safety and environmental assessment and assurance programme should be led by individuals who are experts.  Remaining personnel should be at least Practitioners, who should be provided with oversight where appropriate.
  • Output – A safety and environmental case should be developed which includes a safety argument.  The safety assessment process should be substantiated by quantitative evidence.
  • Assurance – The safety and environmental assessment should be independently assured.
  • ASEMS Guidance – Dedicated, tailored and full implementation of all Clauses, articulated through adherence to all GMPs, SMPs and EMPs.

MEDIUM assessment:
  • Process – Consideration should be given to the application of a qualitative safety and environmental assessment process.  Functional safety/environmental assessment may be required, if identified as a risk control measure.
  • Effort – A medium level of effort should be apportioned to development of the safety and environmental case, increasing for newly developed systems.
  • Competence – Personnel engaged in the safety and environmental assessment and approval should be at least practitioners.
  • Output – A safety and environmental case should be developed, which should include a safety and environmental argument for all but simple low-risk systems.  The safety assessment process should be substantiated by quantitative evidence for newly developed systems.
  • Assurance – Independent assurance should be considered and applied to those projects which are considered to be novel or complex.  Assurance may be conducted at Committee level.
  • ASEMS Guidance – Apply full implementation of all Clauses, in line with guidance provided for the functional safety/environmental assessment (as required, if identified as a risk control measure), and application of GMPs, SMPs and EMPs.

LOW assessment:
  • Process – A qualitative safety and environmental assessment process should be appropriate for low-risk, low-complexity systems.
  • Effort – A medium level of effort should be apportioned to development of the safety and environmental case.
  • Competence – Personnel engaged in the safety and environmental assessment and approval should be at least supervised practitioners, who should be provided with oversight where appropriate.
  • Output – A safety and environmental statement may be considered for systems which are low risk and low complexity.
  • Assurance – Independent assurance is not required.
  • ASEMS Guidance – Where Project Teams have an overarching Safety and Environmental Management System in place: for Safety, gather sufficient evidence to support the safety argument and document it in a Safety Case/Assessment in accordance with SMP 04, 05, 06, 09 and 12; for Environment, gather sufficient information to produce an Environmental Impact Statement in accordance with EMP 07 – Environmental Reporting.
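As a rough illustration, the indicative PECOA guidance above could be captured as a simple lookup. The keys and one-line summaries below are my own paraphrase for illustration, not normative ASEMS text; the guidance itself remains the reference.

```python
# Indicative PECOA guidance per risk band, paraphrased for illustration only.
# Keys and summaries are my own shorthand, not normative ASEMS wording.
PECOA = {
    "HIGH": {
        "Process": "rigorous quantitative assessment",
        "Effort": "significant",
        "Competence": "expert-led; practitioners supporting",
        "Output": "safety and environmental case with quantitative evidence",
        "Assurance": "independent assurance",
    },
    "MEDIUM": {
        "Process": "qualitative assessment; functional assessment if a risk control",
        "Effort": "medium, increasing for newly developed systems",
        "Competence": "at least practitioners",
        "Output": "case with argument for all but simple low-risk systems",
        "Assurance": "independent assurance considered for novel/complex projects",
    },
    "LOW": {
        "Process": "qualitative assessment",
        "Effort": "medium",
        "Competence": "supervised practitioners with oversight",
        "Output": "safety and environmental statement may suffice",
        "Assurance": "independent assurance not required",
    },
}


def guidance(category: str) -> dict:
    """Look up indicative PECOA guidance for a risk band (case-insensitive)."""
    return PECOA[category.upper()]


assert guidance("low")["Assurance"] == "independent assurance not required"
```

Encoding the guidance as data makes it easy to embed a preliminary triage in a project checklist, while leaving the actual judgement with the Committee.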

Process

The type of safety and environmental process which should be applied is dependent both upon the Product, System or Service categorisation and the phase of the CADMID/T cycle that the project is in.  Newly developed MEDIUM-LOW to HIGH category Products, Systems or Services which are in the Concept, Assessment, Development or Manufacture phase of the cycle should have a quantitative safety and environmental assessment process applied; the depth and rigour of the assessment should be proportionate to its classification.  LOW risk Products, Systems or Services where the worst credible consequence is anticipated to be no greater than one minor injury may be assessed qualitatively.

A qualitative safety and environmental assessment process should be applied to Products, Systems or Services in the In-Service or Disposal/Termination phase where sufficient evidence already exists to demonstrate that they are acceptably safe.  In these circumstances, a qualitative process should be applied to assess the in-service risks.

This is a systematic and logical approach to categorising the resource, time and effort required to support any argument that a Product, System or Service is acceptably safe or causes no significant damage to the environment.  It also advocates the application of ASEMS in its entirety, prescribing the level of rigour which should be applied in terms of process, effort, competence, output and assurance.

Effort

The effort apportioned to the safety and environmental process should be proportionate to the classification of the system.  A significant amount of rigour should be applied to those projects requiring quantitative assessment processes, particularly those with the highest degree of risk and complexity.

If a Product, System or Service is assessed to be in a particularly low category and is simple, it may not be necessary to undertake the full scope of risk management procedures.  In these circumstances a certificate of conformance may be sufficient, which may be supported by a statement to that effect from the Safety and Environmental Management Committee.

All decisions made regarding the evidence required to justify a safety argument (regardless of risk) should be endorsed by a Safety and Environmental Management Committee.  Whether this decision is delegated further, for those Products, Systems or Services that are low risk, is for the Duty Holder to determine, as all decisions regarding Risk to Life are made on their behalf.

Competence

The safety and environmental lead should be an expert for HIGH category projects, or for MEDIUM category projects where the Product, System or Service is particularly complex or of novel design.  The remaining personnel engaged on such projects should be at least practitioner level.  A competency assessment should be undertaken and endorsed by a Safety and Environmental Management Committee.

The safety and environmental lead for MEDIUM category projects should be at least practitioner level.  The remaining personnel engaged on such projects should be practitioner or supervised practitioner where appropriate supervision is in place.  A competency assessment should be undertaken which should be endorsed by a Safety and Environmental Management Committee.

The safety and environmental lead for LOW category projects should be at least practitioner level or a supervised practitioner with appropriate supervision in place.

Competency requirements relating to specific safety and environmental processes defined in ASEMS should be applied where those processes are undertaken.

Output

A safety and environmental case should be developed for HIGH category projects which includes a safety and environmental argument, developed using Claims Arguments Evidence (CAE) or Goal Structuring Notation (GSN).  The argument should be substantiated by quantitative evidence such as reliability data or the output from quantitative safety assessment processes.

A safety and environmental case should be developed for MEDIUM category projects which includes a CAE or GSN safety argument.  The quality and depth of evidence required to substantiate the safety and environmental argument should be proportionate to the classification of the Product, System or Service.  Products, Systems or Services with increased complexity or higher degrees of risk should be substantiated by quantitative evidence.

A safety and environmental case should be developed for MEDIUM-LOW category Products, Systems or Services.  A safety and environmental argument should be included for those Products, Systems or Services which are particularly complex or novel, or which exhibit an increased degree of risk.

A safety and environmental case or safety and environmental statement should be developed for LOW category Products, Systems or Services.  A certificate of conformance may be adequate for the lowest-risk, simple Products, Systems or Services.

All decisions made regarding the evidence required to justify a safety argument (regardless of risk) should be endorsed by a Safety and Environmental Management Committee.  If this decision is delegated further, for those Products, Systems or Services that are considered to fall in the low category, then it is for the Duty Holder to determine (as all decisions regarding Risk to Life are made on their behalf) whether to accept the risks or not.

Assurance

HIGH and MEDIUM category projects should be independently reviewed by a Safety and Environmental Auditor.  The degree of Independent Safety and Environmental Auditor engagement should be proportionate to the project categorisation.

MEDIUM-LOW category projects should be independently reviewed by a Safety and Environmental Auditor where the safety and environmental assessment processes applied are novel or complex.  Justification should be provided where an Independent Safety and Environmental Auditor is not appointed.

It is not necessary for projects categorised LOW to be independently reviewed.

It should be remembered that it is not prudent to adopt any system or approach automatically, without sufficient validation, verification and endorsement by competent and duly authorised individuals who are Suitably Qualified and Experienced Personnel for the role.  Decisions should be endorsed by a competent panel or committee, as part of the overall hazard analysis and risk assessment, and any variation in opinion from that presented by any proportionality model should be managed by such a panel.
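The proportionality model above boils down to a lookup from project category to the rigour required. As a purely illustrative sketch (in Python; the category names and field values are my paraphrase of the guidance in this post, not an official ASEMS data structure), it might look like this:

```python
# Illustrative only: a paraphrase of the proportionality model above,
# mapping project category to the rigour required. Not an official
# ASEMS artefact - the wording condenses the guidance in this post.

PROPORTIONALITY = {
    "HIGH": {
        "process": "quantitative",
        "lead_competence": "expert",
        "output": "safety and environmental case with CAE/GSN argument, "
                  "substantiated by quantitative evidence",
        "independent_assurance": "required",
    },
    "MEDIUM": {
        "process": "quantitative, depth proportionate to classification",
        "lead_competence": "at least practitioner",
        "output": "safety and environmental case with CAE/GSN argument",
        "independent_assurance": "required, proportionate to categorisation",
    },
    "MEDIUM-LOW": {
        "process": "quantitative or qualitative",
        "lead_competence": "at least practitioner",
        "output": "safety and environmental case; argument if novel, "
                  "complex or higher risk",
        "independent_assurance": "if novel or complex; justify omission",
    },
    "LOW": {
        "process": "qualitative",
        "lead_competence": "practitioner or supervised practitioner",
        "output": "case or statement; certificate of conformance may "
                  "suffice for the simplest, lowest-risk systems",
        "independent_assurance": "not required",
    },
}

def required_rigour(category: str) -> dict:
    """Look up the rigour requirements for a project category."""
    return PROPORTIONALITY[category.upper()]

# Example: a LOW category project needs no independent review...
print(required_rigour("low")["independent_assurance"])   # not required
# ...whereas a HIGH category project needs an expert lead.
print(required_rigour("high")["lead_competence"])        # expert
```

Remember the caveat, though: a model like this only informs the committee's decision; endorsement still rests with competent, duly authorised people.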

If you found this post on Proportionality helpful, please leave a review.

If this post is missing something you wanted, please let me know!

Categories
Blog Work Health and Safety

Australian vs. UK Safety Law

This post, Blog: Australian vs. UK Safety Law, compares the two approaches, based on my long experience of working on both sides.

Are you a safety professional thinking of emigrating from the UK to Australia?  Well, I’ve done it, and here’s my BREXIT special guide!  In this 45-minute video, The Safety Artisan looks at the similarities and differences between British and Australian safety practices.  This should also help Aussies thinking of heading over to work in the UK and even, dare I say it, to the EU…

“It’s beginning to look a lot like BREXIT! La, La-la, la, la…”

Blog: Australian vs. UK Safety Law, Key Points

  • Introduction. With BREXIT looming, British and Australian professionals may be thinking of working in each other’s countries;
  • Legislation. Our laws, regulations and codes of practice are quite similar;
  • Guidance. Try the UK Health and Safety Executive (HSE) or the Safe Work Australia websites – both are excellent;
  • Jurisdictions. This is complex in a federated state like Australia, so Brits need to do their homework;
  • Regulators. This varies by industry/domain – many are very similar, while some are quite different;
  • Cultural Issues: Australia vs. the UK. Brits and Aussies are likely to feel quite comfortable working in each other’s countries; and
  • Cultural Issues: Australia vs. the EU. There are some commonalities across the EU, but also dramatic differences.

Blog: Australian vs. UK Safety Law: The Transcript

Click Here for the Transcript

Comparing Australian & UK Safety Law: Topics

This is a free, full-length show. Just to let you know, I think it's going to be about 30 minutes; in those 30 minutes, we're going to compare the British and Australian approaches to safety. We're going to talk about the similarities and differences between Australian and British legislation, and about the safety guidance that's available from the various authorities in the different jurisdictions in the UK and Australia. Jurisdiction is not really an issue in the UK, but it certainly is in Australia, so that's something we really need to go through.

We’ll talk about regulators and the different approaches to regulation. And, finally, some cultural issues. I may mention the dreaded EU. It’s worth talking a little bit about that too because there are still significant links between the EU and the UK on how safety is done which Australians might find helpful.

Introduction

Now, where’s Michael Bublé when I need him to sing the song? It says it’s looking a lot like Brexit. With the Conservatives winning in the UK they’ve passed the Brexit act. It looks like it’s finally going to happen. Now whether you think that’s a good idea or not I’m not going to debate that, you’ll be pleased to hear – you’re sick of that, I’m sure.

There are going to be some safety professionals and other engineering professionals who were working in the EU. And who maybe won’t be able to do so easily anymore, and there might be some Brits thinking well maybe this is an opportunity. This is a prompt for me to think about moving to Australia and seeing what life is like there. Conversely, there may be Aussies seeking opportunities in the UK because if the flow of professionally qualified Engineers and so forth from the EU countries dries up or slows down then there might be more opportunity for Aussies. Indeed, the UK has been talking about introducing an Australian-style points-based immigration system. And I think we might see a favourable treaty between UK and Australia before too long.

What have I got to contribute here? I spent quite a few years in the UK as a safety engineer and safety consultant and I worked on a lot of international projects. I worked on a lot of UK procurements of American equipment. And I also worked very closely with German, Italian and Spanish colleagues on the Eurofighter Typhoon for thirteen years on and off. And I have quite a bit of experience of working in Germany and some of working with the French. I’ve got I think quite a reasonable view of different approaches to safety and how the UK differs from and is like our European counterparts.

Also, seven years ago I emigrated to Australia. I went through that points-based process, fortunately with a firm to back me up. I made the transition from doing UK-style safety to Australian-style safety.

Let’s get on with it.

Legislation #1

There are very many similarities between Australian and UK approaches to safety. Australia has learned a lot from the UK and continues to be very close to the UK in many ways, particularly in our style of law and legislation. But there are differences and I’m mainly going to talk about the differences.

First of all, in the UK we've had the Health and Safety at Work (HSAW) Act since 1974. That's the enabling Act that sets up the Health and Safety Executive (HSE) as a regulator, gives it teeth and enables further legislation and regulations. Now, if I was still in the UK, the next thing we would talk about in any discussion of health and safety at work would be the 'six-pack'.

Now, these were six EU directives that the UK converted into UK regulations, as indeed all EU member states were required to. Incidentally, the UK was very successful in influencing EU safety policy, so it's a bit ironic that they're turning their back on that.  What will you find in the six-pack?

First of all, the Management of Health and Safety at Work Regulations; the associated HSE guidance (HSG65) has a lot of good advice on how to do risk management, and is broadly equivalent, for an Aussie audience, to the Risk Management Code of Practice: it's trying to achieve similar things. Then we've got the Provision and Use of Work Equipment Regulations, or PUWER for short, which say that if you provide equipment for workers, it's got to be fit for purpose. Then there are regulations on manual handling, on workplace health, safety and welfare, on personal protective equipment at work, and on display screen equipment of the kind that I'm using here and now (I'm sat in my EU-standard computer chair with five legs and certain mandatory adjustable settings).

Now Aussies will be sat there looking at this list thinking it looks awfully familiar. We just package them up slightly differently.

There are also, it should be said, separate regulations called the Control of Major Accident Hazards Regulations, or COMAH as they're known. They were introduced after the Piper Alpha disaster in the North Sea, which claimed 167 lives in a single accident, and they cover big installations that could cause a mass-casualty accident. So that's the UK approach.

Legislation #2

Now, the Australian approach is much simpler. The Aussies have had time to look at UK legislation, take the essentials from it, and quite cleverly boil it down into its essence. There is a single Work Health and Safety (WHS) Act, which was passed in 2011 and came into force on the 1st of January 2012, and a single set of WHS Regulations that go hand in hand with the Act.

And they cover a wide spectrum of stuff. A lot of the things in the UK that you would see covered in different acts and different regulations are all covered in one place. Not only does it address, as you would expect, the workplace responsibilities of employers and employees etc., but there are also upstream duties on designers and manufacturers and suppliers and importers and so forth. The WHS act pulls all these things together quite elegantly into one.

It’s a very readable act. I have to say it’s one of the few pieces of legislation that I think a non-lawyer can read and make sense of. But you’ve got to read what it says not what you think it says (just a word of caution).  The regulations cover Major Hazard Facilities, rather like the COMAH regulations, so they’re all included as well.

It’s worth noting that Australian WHS, unlike the UK, does not differentiate between safety and security. If somebody gets hurt, then it doesn’t matter whether it is an accident or whether it was a malicious act. If it happens to a worker, then WHS covers it. And that puts obligations on employers to look after the security of workers, which is an interesting difference, as the UK law generally does not do that. We’re seeing more prosecutions (I’m told by the lawyers) for harm caused by criminal acts than we are yet seeing for safety accidents.

And that’s the act and regulations. And it’s also worth saying that Australia has a system of Codes Of Practice just as the UK has Approved Codes Of Practice. Now that’s all I’m going to say for now. There are other videos and resources on the website that go into the Act and Regulations and COP. I’m going to do a whole series on all those things, unpacking them one by one.

Legislation #3

Let’s think about exceptions for a moment because the way that the UK and Australia do exceptions in their Health and Safety legislation is slightly different. In the UK, the Health and Safety at Work Act explicitly does not apply to ships and aircraft moving under their own power. That’s quite clear. That kind of division does not occur in Australia.

Also, the UK Health and Safety at Work Act does not apply to special forces, or to combat operations by the armed forces, or to the work-up to combat operations. Again, those exclusions do not exist in Australia. It's also worth saying that there are many other acts enforced by the UK HSE. It's not just about HSAW, the six-pack and COMAH; there are lots of regulations on mining, offshore and so on, you name it. The UK is a complex economy and there are lots of historical laws, going back 100 years or more. I think the Explosives Act, which is still being enforced, was 1898.

Now Australia has a different approach. They’ve made a clean sweep; taken a very different approach as we’ll see later. And there are only really three explicit exclusions to the Act. It says that WHS doesn’t apply to merchant ships, which are covered by the Occupational Health and Safety (Maritime Industry) Act. So, merchant ships aren’t covered, and WHS doesn’t apply to offshore petroleum installations either. More on that later.

There is a separate act that deals with radiation protection, and that is enforced by ARPANSA, the Australian Radiation Protection and Nuclear Safety Agency. So, [HSAW and WHS have] a slightly different approach to what is covered and what is not, but they are very similar in the essentials.

Legislation #4

One of those essentials is the determination of how much safety is enough. In the UK the HSE talks about ALARP and in Australia the Act talks about SFARP. This quote here is directly from the UK HSE website. Basically, it says that ALARP and SFARP are essentially the same things. And the core concept, what is reasonably practicable, is what’s defined in the WHS Act.

Now, it's worth mentioning that the HSE say this because it was the HSE who invented the term ALARP. If you look in UK legislation you will see the term SFARP, and you'll see other terms like 'all measures necessary'. There are various phrases in UK laws to say how much is enough, and the HSE has said it doesn't matter what it says in the law: the test we will use is ALARP, and it covers all these things. It was always intended to be essentially the same as SFARP.

Now, there is some controversy in Australia about that, and some people think that ALARP and SFARP are different. The truth is that in Australia, as in the UK, some people did ALARP badly; they did it wrong. If you do ALARP wrong, it's not the same as SFARP; it's different. But if you do ALARP properly, it is the same. Now, there are some people who will die in a ditch in order to disagree with me over that, but I'm quoting the HSE, who invented the term to describe SFARP.

It's also worth noting that WHS uses the term SFARP, but the offshore regulator, the National Offshore Petroleum Safety and Environmental Management Authority (NOPSEMA), uses the term ALARP, because they've got a separate act from WHS for enforcing safety on offshore platforms. But again, even though they're using ALARP, it's the same as SFARP if you look at the way that NOPSEMA explain ALARP.  They do it properly, and it matches up with SFARP; in fact, that NOPSEMA guidance is very good.

Guidance

We’ll talk more on regulators, but first a little aside and you’ll see why in a moment. Before we can get to talking about regulators, I need to tell you about where you can get guidance in Australia.

Now, in the UK you've got the HSE, who are the regulator and also provide a lot of guidance. Any safety engineer in the UK will immediately think of a document called R2P2, which is short for 'Reducing Risks, Protecting People'. That's an 80-something-page document in which the HSE explain their rationale for how they will enforce safety law and safety regulations, and what they mean by ALARP and so on. There's also a lot of guidance on their website, which is excellent and available under a Creative Commons licence, so you can do an awful lot with it.

In Australia, it's a little bit more complex than that. The WHS Act was drafted by Safe Work Australia, which is a statutory agency of the government. It's not a regulator, but it was SWA who developed the Model WHS Act, the Model Regulations and the Model Codes of Practice. (More on that in just a second.) It's Safe Work Australia that provides a lot of good guidance on their website.

Most Australian regulators will refer you to legislation [i.e. not their own guidance]. We’ve got a bit of an American approach in that respect in Australia, in that you can’t do anything without a lawyer to tell you what you can and can’t do. Well, that’s the way that some government agencies seem to approach it. Sadly, they’ve lost the idea that the regulator is there to bridge the gap and explain safety to ordinary people so they can just get on with it.

Now some regulators in Australia, particularly say the New South Wales state regulator or Victorian state regulator do provide good guidance for use within their jurisdiction. The red flashing lights and the sirens should be going off at this point because we have a jurisdiction issue in Australia, and we’ll come onto that now.

Jurisdictions

In the UK, it's reasonably simple. You've got the HSE for England and Wales, the HSE for Scotland, and the HSE for Northern Ireland, enforcing essentially the same acts and the same regulations right across the United Kingdom. Now, there are differences in law: England and Wales have one legal system; Scotland has a slightly different legal system; and Northern Ireland has peculiarities of its own. But they're all related. There are historical reasons why the law is different but, from a safety point of view, all three regulators do the same thing, and work consistently.

In Australia, it’s a bit different. Australia is a Federated Nation. We have States and Territories as you can see, we’ve got Queensland, New South Wales and Victoria. Within New South Wales we’ve got the ACT, that’s the Australian Capital Territory, and Canberra is the Australian Federal capital.

Most Australians live on that East Coast, down the coast of Queensland, NSW and Victoria. Then we've got Tasmania, South Australia, the Northern Territory and Western Australia. All those states and territories have, and enforce, their own safety law and regulations.

On top of that, you’ve got a Federal approach to safety as well. Now, this will be a bit of a puzzle to Brits, but in Australia, we call the national government in Canberra ‘the Commonwealth’. Brits are used to the Commonwealth being 100+ countries that used to belong to the UK, but now they’re a club. But in Australia, the Commonwealth is the national government, the Federal Government.

Regulators #1

Let's talk about regulators, starting at the national level. In the bottom right-hand corner, we have Comcare. They are the national regulator, who enforce WHS for the Commonwealth of Australia: all Federal workplaces, Defence, any land that's owned by the Commonwealth, and anything where you've got a national system. You've also got some nationalised or semi-nationalised industries that effectively belong to the Commonwealth, or are set up by national regulations, and they operate to the Commonwealth version of WHS.

Then you've got the Northern Territory, Tasmania, South Australia, Queensland, New South Wales and the Australian Capital Territory. All those states and territories have their own versions of the Model WHS Act, Regulations and Codes of Practice. They're not all identical, but they're pretty much the same. There are slight differences in the way that things are enforced; for example, in South Australia there are a couple of Codes of Practice that SafeWork SA have said they will not enforce.

These differences don’t change the price of fish. All these regulators have their own jurisdiction, and they’re all doing more or less the same thing as Commonwealth WHS. If you start with the Model WHS Act or the Commonwealth version, then you won’t be far off what’s going on in those states and territories. However, you do have to remember that if you’re doing non-Commonwealth work in those states and territories, you’re going to be under the jurisdiction of the local state or territory regulator.

That’s the easy bit!

Unfortunately, not all states have adopted WHS yet. Western Australia (bottom left-hand corner) is going to implement WHS, but it's not there yet. Currently, in December 2019, they're heading towards WHS, but they're still using their old Occupational Safety and Health (OSH) legislation, from about 1999 I think.

Victoria has decided that they're not going to implement WHS. Even though everybody agreed they would change over, they're going to stick with their Occupational Health and Safety Act, which again I think dates from something like 1999. (These acts are amended and kept up to date.)  Victoria has no plans to implement WHS.

You, like me, might be thinking what a ridiculous way this is to organise yourself. We’re a nation of less than twenty-five million people, and we’ve got all this complexity about regulators and how we regulate and yes: it is daft! Model WHS was an attempt to get away from that stupidity. I have to say it’s mostly been successful, and I think we will get there one day, but that’s the situation we’ve got in Australia.

Regulators #2

Now, a quick little sample of regulators in the UK and Australia, just to compare. I can't go through them all, because there are a lot. I wanted to illustrate the similarities and differences; there are many similarities for Brits coming to Australia or Aussies going to the UK. You will find a regulatory system that, for the most part, looks and feels familiar.

In the UK, for example, you've got the Civil Aviation Authority, who regulate non-military flying, airports, etc.; in Australia, you've got the Civil Aviation Safety Authority, which does almost the same thing. In the UK you've got the Air Accidents Investigation Branch, who do what their name implies; in Australia, you've got the Australian Transport Safety Bureau, who also investigate air accidents (they do maritime accidents as well). By the way, the ATSB in Australia is somewhat modelled on the American NTSB, with a very similar approach to the way they do business.

Now, when we get onto the maritime side, it's quite different. In the UK, you've got the Maritime and Coastguard Agency, or MCA, who regulate civil maritime traffic and health and safety on merchant ships (accident investigation is handled separately, by the Marine Accident Investigation Branch). In Australia, don't forget, we've got the ATSB looking at maritime accidents and publishing statistics. We've then got the Australian Maritime Safety Authority, the AMSA, who look at the design aspects of the safety of ships. (These are all national / Federal / Commonwealth regulators, by the way.) You've then got Seacare, who look at the OH&S workplace aspects of working on merchant ships.

Then separately [again] we’ve got the National Offshore Petroleum Safety and Environmental Management Authority NOPSEMA, who look after oil rigs and gas rigs, that sit more than three nautical miles offshore. Because if they’re inside three nautical miles then that’s the jurisdiction of the local state or territory.

Indeed, NOPSEMA is evidence of the Federal government trying to get all the states and territories to come together.  They succeeded with WHS but, with the offshore stuff, the states and territories refused to cooperate with the Commonwealth. (This is a common theme in Australia; the different branches of government seem to delight in fighting each other rather than serving the Australian public.) The Commonwealth decided Australia could not develop an offshore industry on this basis, so they unilaterally set up NOPSEMA. Bang. Suck on that, states and territories.

Culture

Let's look a little bit at culture. Let's face it, Australians, Brits and Americans are in many ways very similar. We have an Anglo-Saxon approach to things, and Australian and British law is very similar. We also have a similar sense of humour, which is very important when trying to do safety.

You've got the Five Eyes countries – Australia, New Zealand, the UK, the US and Canada – who have worked closely together for several decades. There's a lot of commonality between these English-speaking countries that share a common Anglo-Saxon colonial past.

However, the big difference in Australia is that we are much more heavily influenced by the US than the UK is. You'll find a lot of US-style 'certification against specification' in different Australian industries. That's subtly different to the UK and Australian legal approach, which is based on 'safety by intent': the idea that safety is achieved by keeping people safe (managing risk in the real world), where a contract specification means very little. Are people kept safe? That's the essential idea behind UK and Australian law, and it's a bit different to the American approach of specifications and requirements.

There’s nothing wrong with either approach, they’re just different, but mixing them together does cause confusion. In the UK if you work, as I did for most of my working life, in the aviation industry, it is an international enterprise and it uses a US-style safety-by-specification and certification approach because civil aviation is essentially US-led. (From the 1944 Chicago convention onwards.) It’s important to understand the difference, and there’s a lot more of this US certification influence in Australia.

Comparing Australian & UK Safety Law: Summary

We've talked about some different aspects. I can't go into detail on everything, as I simply don't know all the details; I'm not an expert in all domains. Nobody is. But I hope I've given you a useful overview of the differences for British engineers wanting to be aware of safety in Australia, and Aussies wanting to go to the UK.

Cultural Issues: UK versus the EU

While we're on the subject, it's also worth having just one slide on the EU, because the UK has been part of the EU for a long time, and UK legislation has been heavily influenced by the EU and vice versa. As I said earlier, the UK has been quite successful in influencing EU directives, which the UK then turns into regulations, as the other EU nations do. That's the second bullet point. If you go to work in the EU, you should find local laws that implement the EU directives, in common with the UK.

The big difference between the UK and the other EU states is the ALARP measure of how much safety is enough, which is unique to the UK. So much so that the UK was taken to the European Court of Justice on the grounds that ALARP was a sort of anti-competitive variation that shouldn't be allowed. The challenge lost, and ALARP stands in the UK, but it just illustrates that there are some critical differences, and ALARP is probably the most important one.

Back to the first bullet point. In English, we differentiate between safety and security. I've mentioned that the UK HSAW Act does so, but WHS does not (deliberately, I guess): whether it's accidental harm or malicious harm, you've got to protect your workers. However, in many European countries the words for safety and security are the same. In Germany, 'Sicherheit' means both safety and security; in France it's 'sécurité', with variations thereof in other Romance languages. Safety and security are the same word in many European languages.

Now having said that, a lot of these EU economies where you might be thinking of working, are modern economies with lots of internationally regulated stuff going on. The aviation industry, for example, but there are lots of advanced industries that are regulated in a similar way, right around the world. You’ll still find familiar concepts in different EU countries.

Now culturally, I’ve spent a lot of time working with Germans, who tend to come unstuck with the Anglo-Saxon approach to safety, because they have the mentality that they make things to work, not to fail. For German engineers especially, the Anglo-Saxon fixation with looking at how things could go wrong seems very strange. They often just don’t get it unless they’ve been in an industry like aviation, where that approach has been inculcated into them. Germans often don’t understand Australian WHS, because it’s just not their mentality. (They don’t build things to fail, they build them to work, so maybe ‘Safety-II’ will take off in Germany because of that.)

In France, I have to say, the French are extremely competent engineers and they're very good at safety. However, they do it their way, the French way, which is different to the UK/Australian way. Don't expect the French to do it our way. They're going to do it their way, and you need to learn to understand what they do, how they do it, and why they do it that way. France is in many ways a very nationalized country, and engineering there is a national enterprise: most engineers go through one system, and there is one top college for engineering in France.

There's one, and only one, way of doing it in France, which may come as a bit of a shock to Aussies, given our somewhat 'here and there' approach to regulation in Australia. The French are competent, but don't expect them to comply with the Aussie or UK way of doing things.

Now, I've said 'variations across Southern Europe', and I'm trying to be tactful here, because a lot of the southern European approach to safety is very variable. Sometimes I've been very impressed watching how, say, the Spanish do business, but in other countries, like Italy, the approach to safety can be a bit of a shocker. If you're buying stuff from Italy, the contract may say they'll do 'x, y, z' and produce safety reports. Just because they've said so doesn't mean that it's going to happen, or that the stuff they produce is going to be worth the paper it's written on, quite frankly. Some countries are very good in certain areas, but not so much in others.

Copyright Statement

Well, thanks for listening! This presentation contains a little information from the UK HSE and some from Safe Work Australia, which I've reproduced under the [appropriate] Creative Commons licenses. If you go to The Safety Artisan website, you will see the details of the licenses.

The content of this video presentation is copyright The Safety Artisan, 2019.

[Please SUBSCRIBE to The Safety Artisan YouTube channel to see free training videos and free previews of paid content.]

It just remains for me to say stay safe and I’ll see you next month. Goodbye!

Blog: Australian vs. UK Safety Law: The End!

Back to the WHS Topic Page.