Categories
Blog software safety

SW Safety Principles Conclusions and References

SW Safety Principles Conclusions and References is the sixth and final blog post in our series on Principles of Software Safety Assurance. In this series, we have looked at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Conclusion

These six blog posts have presented the 4+1 model of foundational principles of software safety assurance. The principles strongly connect to elements of current software safety assurance standards and they act as a common benchmark against which standards can be measured.

Through the examples provided, it’s also clear that, although these concepts can be stated clearly, they haven’t always been put into practice, and difficulties may remain in how current standards apply them. In particular, there is still a great deal of research and discussion about the management of confidence in software safety assurance (Principle 4+1).

[My own, informal, observations agree with this last point. Some standards apply Principle 4+1 more rigorously, but this makes them more expensive to follow, and consequently less popular and less used.]

Standards and References

[1] RTCA/EUROCAE, Software Considerations in Airborne Systems and Equipment Certification, DO-178C/ED-12C, 2011.

[2] CENELEC, EN-50128:2011 – Railway applications – Communication, signaling and processing systems – Software for railway control and protection systems, 2011.

[3] ISO 26262 – Road vehicles – Functional safety, FDIS, International Organization for Standardization (ISO), 2011.

[4] IEC 61508 – Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, International Electrotechnical Commission (IEC), 1998.

[5] FDA, Examples of Reported Infusion Pump Problems, Accessed on 27 September 2012, http://www.fda.gov/MedicalDevices/ProductsandMedicalProcedures/GeneralHospitalDevicesandSupplies/InfusionPumps/ucm202496.htm

[6] FDA, FDA Issues Statement on Baxter’s Recall of Colleague Infusion Pumps, Accessed on 27 September 2012, http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm210664.htm

[7] FDA, Total Product Life Cycle: Infusion Pump – Premarket Notification 510(k) Submissions, Draft Guidance, April 23, 2010.

[8] “Report on the Accident to Airbus A320-211 Aircraft in Warsaw on 14 September 1993”, Main Commission Aircraft Accident Investigation Warsaw, March 1994, http://www.rvs.uni-bielefeld.de/publications/Incidents/DOCS/ComAndRep/Warsaw/warsaw-report.html, Accessed on 1st October 2012.

[9] JPL Special Review Board, “Report on the Loss of the Mars Polar Lander and Deep Space 2 Missions”, Jet Propulsion Laboratory, March 2000.

[10] Australian Transport Safety Bureau, In-Flight Upset Event 240 km North-West of Perth, WA, Boeing Company 777-200, 9M-MRG, Aviation Occurrence Report 200503722, 2007.

[11] H. Wolpe, General Accounting Office Report on Patriot Missile Software Problem, February 4, 1992, Accessed on 1st October 2012, Available at: http://www.fas.org/spp/starwars/gao/im92026.htm

[12] Y.C. Yeh, Triple-Triple Redundant 777 Primary Flight Computer, IEEE Aerospace Applications Conference, pp. 293–307, 1996.

[13] D.M. Hunns and N. Wainwright, Software-based protection for Sizewell B: the regulator’s perspective. Nuclear Engineering International, September 1991.

[14] R.D. Hawkins, T.P. Kelly, A Framework for Determining the Sufficiency of Software Safety Assurance. IET System Safety Conference, 2012.

[15] SAE, ARP 4754 – Guidelines for Development of Civil Aircraft and Systems, 1996.

Software Safety Principles: End of the Series

This blog post series was derived from ‘The Principles of Software Safety Assurance’, by RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Software Safety Assurance and Standards

This post, Software Safety Assurance and Standards, is the fifth in a series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Relationship to Existing Software Safety Standards

The software safety assurance principles discussed in this article are rarely explicit in software safety standards, though they are typically present implicitly. By concentrating only on adherence to the letter of these standards, software developers risk losing sight of the standards’ primary goals (e.g. through box-ticking). Below, we look at how each of the Principles manifests in some of the most popular software safety standards – IEC 61508, ISO 26262, and DO-178C.

Principle 1

Both IEC 61508 and ISO 26262 explicitly link system-level hazard analysis to software safety requirements. DO-178C requires the definition of high-level software requirements that address the system requirements allocated to software in order to prevent system hazards. Particularly when used in conjunction with its companion standard ARP 4754, this addresses Principle 1.

[In military aviation, I’m used to seeing DO-178 used in conjunction with Mil-Std-882. This also links hazard analysis to software safety requirements, although perhaps not as thoroughly as ARP 4754.]

Principle 2

Traceability of software safety requirements is always required. The standards also place a strong emphasis on iterative validation of the software requirements.

DO-178C and ISO 26262 provide specific examples of requirements decomposition models. Where standards frequently fall short is in capturing the justification for the required traceability (a crucial aspect of Principle 2).

What is particularly lacking is a focus on upholding the intent of the software safety requirements. This calls for richer forms of traceability that capture the requirements’ purpose at each phase of development, rather than purely syntactic links.

Principle 3

Guidance on requirements satisfaction forms the basis of the software safety standards. Although there are distinct differences in the advised methods of satisfaction, this principle is generally thoroughly addressed (for example, DO-178 traditionally placed a strong emphasis on testing).

[Def Stan 00-55 places more emphasis on proof, not just testing. However, this onerous software safety standard has fallen out of fashion.]

Principle 4

This principle requires demonstrating the absence of errors introduced during the software lifecycle. Aspects of it can be seen in the standards. However, software hazard analysis receives the least attention of all the areas the standards cover.

[N.B. The combination of Mil-Std-882E and the Joint Software Systems Safety Engineering Handbook places a lot of emphasis on this aspect.]

The standards implicitly treat safety analysis as a system-level process. Software development’s role is then to demonstrate that the requirements produced by those system-level processes, including the safety requirements allocated to software, are correctly implemented. At later phases of development, these requirements are refined and implemented without software hazard analysis being explicitly applied.

There is no specific requirement in DO-178C to identify “emergent” safety risks during software development, but it does permit recognized safety issues to be fed back to the system level.

Principle 4+1

All the standards share the idea of modifying the software assurance approach in accordance with “risk.” However, there are significant differences in how the software’s criticality is assessed. IEC 61508 establishes a Safety Integrity Level based on the risk reduction the function must provide, DO-178B emphasizes the severity of failure conditions, and ISO 26262 adds the idea of the vehicle’s controllability. The suggested strategies and processes also vary greatly between criticality levels.
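To make the contrast concrete, here is a deliberately simplified sketch of the severity-driven scheme of DO-178B/C, where each failure-condition severity category maps to a software level (A being the most critical). The function name and string categories are my own; real level assignment comes out of system-level safety assessment, not a table lookup.

```python
# Severity-to-level mapping in the spirit of DO-178B/C software levels
# (A = most critical, E = no safety effect). Illustrative only.
LEVEL_BY_SEVERITY = {
    "catastrophic": "A",
    "hazardous": "B",
    "major": "C",
    "minor": "D",
    "no safety effect": "E",
}

def assurance_level(severity: str) -> str:
    """Map a failure-condition severity category to its notional level."""
    return LEVEL_BY_SEVERITY[severity.lower()]
```

By contrast, an IEC 61508 SIL folds in the required risk reduction, and an ISO 26262 ASIL combines severity with exposure and controllability, so neither reduces to a one-factor lookup like this.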

[The Mil-Std-882E approach is to set a ‘level of rigor’ for software development. This uses a combination of mishap severity and the reliance placed on the software to set the level.]

Software Safety Assurance and Standards: End of Part 5 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Software Safety Assurance

Software Safety Assurance is the fourth in a series of six blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Software Assurance = Justified Confidence

[The original authors referred to Principle 4+1 as ‘confidence’, but this term is not well recognized, so I have used ‘assurance’. The two terms are related. Both terms get us to ask: how much safety is enough? This is also the topic addressed in my blog post on Proportionality.]

Principle 4+1:

The confidence established in addressing the software safety principles shall be commensurate to the contribution of the software to system risk.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

All safety-related software systems must adhere to the four aforementioned principles. To prove that each of the guiding principles has been established for the software, evidence must be presented.

The evidence may take many different forms, depending on the characteristics of the software system itself, the hazards that are present, and the principle being demonstrated. The strength and quantity of the supporting evidence will determine how confidently, or assuredly, the principle is established.

It is therefore crucial to confirm that the level of confidence achieved is acceptable. This is frequently accomplished by ensuring that the level of confidence attained corresponds to the contribution the software makes to system risk. This strategy ensures that the areas that reduce safety risk the most receive the most attention (when producing evidence).

This method is extensively used today. Many standards employ concepts like integrity or assurance levels to describe the amount of confidence needed in a certain software function.

Examples

The flight control system for the Boeing 777 airplane is a Fly-By-Wire (FBW) system … The Primary Flight Computer (PFC) is the central computation element of the FBW system. The triple modular redundancy (TMR) concept also applies to the PFC architectural design. Further, the N-version dissimilarity issue is integrated to the TMR concept.

Y.C. Yeh, ‘Triple-Triple Redundant 777 Primary Flight Computer’, 1996

Details are given of a ‘special case procedure’ within the principles’ framework which has been developed specifically to handle the particular problem of the assessment of software-based protection systems. The application of this ‘procedure’ to the Sizewell B Nuclear Power Station computer-based primary protection system is explained.

D.M. Hunns and N. Wainwright, ‘Software-based protection for Sizewell B: the regulator’s perspective’, 1991

Suitability of Evidence

Once the essential level of confidence has been established, it is crucial to be able to judge whether it has been reached. Several factors must be taken into account when determining the degree of confidence with which each principle is put into practice.

The suitability of the evidence should be considered first, together with the limitations of the type of evidence being used. These limitations will affect the degree of confidence that can be placed in each type of evidence with respect to a particular principle.

Examples of such limitations include the degree of test coverage that can be achieved, the accuracy of the models employed in formal analysis approaches, and the subjectivity of review and inspection. Most techniques are limited in what they can achieve.

Due to these limitations, it may be necessary to combine diverse types of evidence to reach the required degree of confidence in any one of the principles. The trustworthiness of each piece of evidence must also be considered: the degree of confidence that the item of evidence performs as expected.

This is also frequently referred to as evidence rigor or evidence integrity. The rigorousness of the technique employed to produce the evidence item determines its reliability. The primary variables that will impact trustworthiness are Tools, Personnel, Methodology, Level of Audit and Review, and Independence.
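As a toy illustration of weighing those factors, the sketch below scores an evidence item on each one. The 0–2 scale and the idea of summing the scores are entirely my own invention; in practice this assessment is qualitative and argued, not computed.

```python
from dataclasses import dataclass

@dataclass
class EvidenceItem:
    """An item of assurance evidence scored (0-2, invented scale) on the
    factors that affect its trustworthiness."""
    name: str
    tools: int          # 0 = unqualified toolchain, 2 = qualified/validated
    personnel: int      # 0 = untrained, 2 = suitably qualified and experienced
    methodology: int    # 0 = ad hoc, 2 = defined process, followed
    audit_review: int   # 0 = none, 2 = thorough audit and review
    independence: int   # 0 = self-assessed, 2 = fully independent assessment

    def rigour(self) -> int:
        """A crude aggregate; real assessments argue each factor separately."""
        return (self.tools + self.personnel + self.methodology
                + self.audit_review + self.independence)
```

The point of the sketch is only that trustworthiness is multi-dimensional: strong tooling cannot compensate for an ad hoc methodology or a complete lack of independence.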

The four software safety principles will never change. However, there is a wide range in the confidence with which those principles can be established. We now know that a judgement must be made about the degree of assurance required for the principles to be established for any given system. This gives us our final guiding principle.

Since it affects how the previous four principles are put into practice, this concept is also known as Principle 4+1.

Software Safety Assurance: End of Part 4 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.


Software Safety Principles 2 and 3

Software Safety Principles 2 and 3 is the second in a series of blog posts on Principles of Software Safety Assurance. In it, we look at the 4+1 principles that underlie all software safety standards.

We outline common software safety assurance principles that are evident in software safety standards and best practices. You can think of these guidelines as the unchanging foundation of any software safety argument because they hold true across projects and domains.

The principles serve as a guide for cross-sector certification and aid in maintaining comprehension of the “big picture” of software safety issues while evaluating and negotiating the specifics of individual standards.

Principle 2: Requirement Decomposition

The second software safety principle is:

Principle 2: The intent of the software safety requirements shall be maintained throughout requirements decomposition.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

As the software development lifecycle progresses, the requirements and design are gradually decomposed, producing a more detailed software design. The requirements captured for this more detailed design are known as “derived software requirements”. Once the software safety requirements have been established as complete and correct at the highest (most abstract) level of design, their intent must be maintained as they are decomposed.

An example of the failure of requirements decomposition is the crash of Lufthansa Flight 2904 at Warsaw on 14 September 1993.

In essence, the issue is one of ongoing requirements validation: how do we show that the requirements expressed at one level of design abstraction are equivalent to those defined at a more abstract level? This difficulty arises continually during the software development process.

It is insufficient to consider only requirements satisfaction. In the Flight 2904 example, the software safety requirements had been met; however, in the real world they did not match the intent of the high-level safety requirements.
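The Flight 2904 problem can be caricatured in a few lines. The sketch below is a toy model only: the thresholds, values, and sensor names are invented, and the real A320 logic was far more involved. The high-level intent is “enable braking devices when the aircraft is on the ground”; the derived requirement infers “on ground” from sensor conditions, and an aquaplaning, lightly loaded landing defeats that inference.

```python
# Toy model of intent loss during decomposition (invented thresholds).
# High-level intent: enable braking devices when the aircraft is on the ground.
def braking_enabled(wheel_rpm: float, gear_load_kg: float) -> bool:
    # Derived requirement: treat "on ground" as wheels spinning fast
    # AND significant compression load on the landing gear.
    return wheel_rpm > 72 and gear_load_kg > 12000

# Aquaplaning on a wet runway: the aircraft IS on the ground, but the
# wheels barely spin and lift still unloads the gear, so the derived
# condition fails and the high-level intent is not met.
on_ground_but_inhibited = not braking_enabled(wheel_rpm=10.0, gear_load_kg=6000.0)
```

Each derived condition is individually reasonable, and the software satisfies it exactly; it is the translation of intent into sensor conditions that fails in an unanticipated scenario.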

Human factors issues are another consideration that can make decomposition more challenging: a warning may be presented to the pilot as required, yet go unnoticed on a busy cockpit display.

One theoretical solution is to ensure that all necessary detail is included in the initial high-level requirement. In practice, however, this is difficult to achieve. Design choices requiring more detailed requirements will inevitably be made later in the software development lifecycle, and that detail cannot be known accurately until the design choice has been made.

The decomposition of safety requirements must always be managed if the software is to be regarded as safe to use.

Requirements Satisfaction

The third software safety assurance principle is:

Principle 3: Software safety requirements shall be satisfied.

‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York.

Once a set of “valid” software safety requirements has been defined, it must be confirmed that they have been satisfied. This set may be allocated software safety requirements (Principle 1), or refined or derived software safety requirements (Principle 2). A crucial precondition for demonstrating satisfaction is that these requirements are precise, well-defined, and genuinely verifiable.

The types of verification techniques used to show that the software safety requirements have been met will vary with the degree of safety criticality, the stage of development, and the technology employed. It is therefore neither practical nor wise to attempt to mandate particular verification methodologies for the generation of verification evidence.

Mars Polar Lander was an ambitious mission to set a spacecraft down near the edge of Mars’ south polar cap and dig for water ice. The mission was lost on arrival on December 3, 1999.

Given the complexity and safety-critical nature of many software-based systems, it is clear that using just one type of software verification is insufficient. As a result, a combination of verification techniques is frequently required to produce the verification evidence. Testing and expert review are frequently used to produce primary or secondary verification evidence. However, formal verification is increasingly emphasized because it may demonstrate satisfaction of the software safety requirements more reliably.
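A simple book-keeping sketch shows the idea of combining techniques. This helper (my own illustration, including the requirement IDs and technique names, not from any standard) records which technique backs each requirement and flags those resting on a single technique, since each technique has its own blind spots.

```python
from collections import defaultdict

def single_technique_requirements(evidence: list[tuple[str, str]]) -> list[str]:
    """Given (requirement_id, technique) pairs such as ("SSR-4", "testing"),
    return the requirement IDs backed by only one distinct technique."""
    techniques: dict[str, set[str]] = defaultdict(set)
    for req_id, technique in evidence:
        techniques[req_id].add(technique)
    return sorted(req for req, techs in techniques.items() if len(techs) < 2)
```

In practice the judgement is richer than counting techniques, since the techniques must also have complementary limitations, but the book-keeping itself is this simple.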

The main obstacle to proving that the software safety requirements have been met is the inherent limitations of the evidence produced by the methods described above. These difficulties are rooted in the characteristics of the problem space.

Given the complexity of software systems, especially those used to achieve autonomous capabilities, both testing and analysis methodologies face challenges of completeness. Formal methods can verify the underlying logic of the software, but significant drawbacks remain: it is difficult to assure the validity of the models used, and formal methods do not address the crucial problem of hardware integration.

Clearly, the capacity to meet the stated software safety requirements is a prerequisite for ensuring the safety of software systems.

Software Safety Principles 2 & 3: End of Part 2 (of 6)

This blog post is derived from ‘The Principles of Software Safety Assurance’, RD Hawkins, I Habli & TP Kelly, University of York. The original paper is available for free here. I was privileged to be taught safety engineering by Tim Kelly, and others, at the University of York. I am pleased to share their valuable work in a more accessible format.

If you found this blog article helpful then please leave a review, below. If you have a private question or comments then please connect here.