
“How To Fail at Aviation Software Development” (Then Hopefully Succeed!)

1 Nov

Too often, “How to” guides present lofty goals which seem desirable but are, in practice, unattainable. This paper is different. An unhealthy software development diet is just like a poor nutritional diet: seemingly good tastes can become bad habits and result in real harm to health. But what ingredients go into an unhealthy airborne software diet? Understanding and addressing airborne software failures is the first step towards better avionics health. Remember: with knowledge, training, discipline, and practice, software engineers – just like athletes – can become winners. The following is an excerpt from a new paper written by Vance Hilderman of AFuzion (with Joao Esteves of CRITICAL Software); to download the entire paper, just visit http://www.afuzion.com and request a free download from the Free Technical Whitepapers page, which includes “How To Fail”.


DO-178C describes the integral processes spanning the software development lifecycle. Up to 71 formal objectives are summarized, covering the full range of software engineering activities from planning through certification. While some of these objectives are self-evident (for example, ‘develop requirements before design’, ‘develop design before code’), others are more nuanced. Returning to our metaphor, the parallels between software health and human health abound: everyone knows that reducing fat and sugar intake while engaging in modest exercise yields health benefits. But how can you identify the most cost-effective and healthy foods? And, when it comes to exercise, what types and frequency provide the greatest return on time while minimizing the risk of injury? DO-178C says nothing about reducing schedule, cost, or risk, yet these are paramount to the success of any avionics project. 95 of the world’s 100 most successful avionics companies have hired the authors of this paper (CRITICAL Software and AFuzion) to prove or improve airborne software health. Drawing on this experience, this paper explores the ingredients that contribute to BAD avionics software health and how to avoid them, in order to better understand the healthy practices that lead to successful airborne software development.

 

BAD Software Health: The Best (or Worst) Ways to Fail …

DO-178C’s 71 objectives are interspersed across ten categories of avionics software engineering activities. Failures can literally occur within any of these objectives or activities. But which objectives are most frequently misunderstood, and which activities carry the greatest risk of failure? This paper summarizes the best (or worst!) ways to fail within each of DO-178C’s ten process categories. That’s right! Each of DO-178C’s ten process categories can be associated with major failures. The secret is knowing what the potential failures are so that you can formulate strategies in advance to avoid them while maximizing your chances of success. And, of course, maximizing success means minimizing risk, cost, and schedule while simultaneously maximizing quality. You’ve no doubt heard the saying: “You can have it fast, or cheap, or with high quality … so pick one.” However, successful avionics projects truly require all three: cost-effective and fast development processes, with acceptably high standards of quality. Here, avoiding the common mistakes within each of DO-178C’s ten software engineering activities is paramount to success.

 

DO-178C’s Ten Process Categories

So we know that DO-178C covers ten categories of software engineering activities.  These ten categories and the common mistakes associated with each are depicted below:

DO-178C’s Top Ten Failures vs Activities

 

  1. Planning to Fail

Aviation, like athletic training, is all about preparation and planning. Before an aircraft takes off, a flight plan is conceived, detailed, and then filed. Similarly, a competitive athlete makes a training plan, and smart, successful athletes use nutritionists and trainers to help them. Success requires planning, and no one intentionally plans to fail. However, DO-178C requires an advance planning activity yielding five detailed plans (the PSAC, SDP, SVP, SCMP, and SQAP) plus three standards (requirements, design, and coding standards); the planning recipe requires sufficient detail on the required ingredients of “what”, “who”, and “when” without too much detail on “how”. As mentioned, avionics software health resembles human health in many ways: many factors come together to contribute to the overall state of health and, for avionics, those factors must be summarized in the five plans and three standards.


Some athletes find that, despite their seemingly well-prepared exercise plans, their improvement quickly fades along with their dreams of success: their exercise plan simply lacks what is needed to strengthen muscles. Why? Typically, the answer is that they are missing crucial details about diet, intensity, and duration of exercise and, most importantly, about how to measure success. Aviation is the same. With DO-178C it is all too easy to draft high-level plans which do not sufficiently define the key detailed aspects of your avionics system necessary for successful certification. Common aspects missing from DO-178C planning documents include:

 

  • Use of previously developed software. If used, how will it be certified or brought up to compliance standards?
  • Use of configurable software, where users may “reconfigure” software after certification by modifying off-board configuration data. It is imperative to follow DO-178C’s explicit Parameter Data Item (PDI) criteria for such activities.

 

  • Application of DO-330 for Tool Qualification: identify the full suite of software engineering tools planned for use, then define whether (and how) the output of each tool will be verified. If that verification is insufficient, formal DO-330 Tool Qualification will likely be required.

 

  • Application of Model-Based Development per DO-331 and, when used, a brief synopsis of the applicable Specification Model standard versus the Design Model standard (these cannot be the same standard: like high-level requirements and low-level requirements, the Specification Model and the Design Model are distinctly different and require sequential verification, in that order).

 

  • Description of the means to analyse data coupling and control coupling. This coupling analysis is about more than checking a box during a code peer review – it requires proper consideration of the software design and of the interfacing modules/data. Recent third-party tool improvements help with coupling analysis, but your plans must summarize the coupling analysis activities your verification engineers will perform (a minimal sketch of the two kinds of coupling appears after this list).

 

  • Regression analysis: all projects have changes, and the means to verify those changes depends on regression analysis. The processes used by verification engineers for regression analysis and retesting must be described in sufficient detail for the plan reviewer to assess their adequacy – experienced verification engineers know how to apply tools and techniques that semi-automate this process and ensure it is done right the first time.
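Below is a minimal, hypothetical C sketch of the distinction such a coupling analysis must address (all module and function names are invented for illustration): data coupling, where one component produces data another consumes, and control coupling, where one component influences the execution path of another.

    /* Hypothetical illustration of data coupling vs. control coupling. */
    #include <stdint.h>
    #include <stdio.h>

    /* sensor.c stand-in: produces data consumed elsewhere (data coupling). */
    static int32_t sensor_read_airspeed_kts(void) { return 250; }

    /* mode_logic.c stand-in: produces a mode that steers another module's
       control flow (control coupling). */
    typedef enum { MODE_GROUND, MODE_FLIGHT, MODE_DEGRADED } op_mode_t;
    static op_mode_t mode_logic_get_mode(void) { return MODE_FLIGHT; }

    /* control.c stand-in: consumes the airspeed value. */
    static void control_update(int32_t airspeed_kts)
    {
        printf("control law updated with airspeed %d kts\n", (int)airspeed_kts);
    }

    static void scheduler_step(void)
    {
        int32_t airspeed = sensor_read_airspeed_kts();   /* data coupling    */

        if (mode_logic_get_mode() == MODE_FLIGHT) {      /* control coupling */
            control_update(airspeed);                    /* data coupling    */
        }
    }

    int main(void)
    {
        scheduler_step();
        return 0;
    }

Coupling analysis would confirm that the sensor-to-control data interface and every mode-driven branch are defined in the design and exercised by requirements-based tests – something a local peer review of any single file cannot demonstrate.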


By considering the above common mistakes within your “avionics exercise plan”, your project has a chance to win the gold medal at the first attempt.


  2. The Requirements of Failure

 

The world’s leading software experts agree: most software defects are due to defective requirements. Yet DO-178C provides scant detail on how to ensure that you have great requirements. Yes, you could hire or outsource your development to world-class avionics developers, but what can you learn from them about developing great avionics software requirements? First, experts know that the successful longevity of DO-178 is due in no small part to its careful balancing of 71 deterministic objectives against flexibility in how those objectives are met. Yes, DO-178C could include many more pages of suggestions on how to improve software requirements: but they would be just that – subjective suggestions. Consider that avionics projects span a vast variety of domains, complexity, criticality, and size; no single “How to” guide would suffice. Instead, DO-178C uses structural coverage analysis to assess requirements: for Design Assurance Levels (DALs) where people could be seriously injured or worse (beginning with DAL C), software structural coverage is required. Simply put, structural coverage objectives exist for multiple purposes, including assessing how completely the requirements-based functional tests exercise the code – coverage gaps usually point to missing or weak requirements, untestable design, or extraneous code.
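As a brief, hedged illustration (the function below is invented), consider the coverage ladder DO-178C applies by DAL: statement coverage is required from DAL C, decision coverage is added at DAL B, and Modified Condition/Decision Coverage (MC/DC) at DAL A.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical alerting decision, used only to illustrate coverage levels. */
    static bool stall_warning(bool low_airspeed, bool high_aoa)
    {
        /* Statement coverage (DAL C): every statement executed at least once.
           Decision coverage (DAL B): the 'if' observed both true and false.
           MC/DC (DAL A): each condition shown to independently affect the
           outcome, e.g. with requirement-based vectors (T,T), (F,T), (T,F). */
        if (low_airspeed && high_aoa) {
            return true;
        }
        return false;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               stall_warning(true, true),    /* both conditions true       */
               stall_warning(false, true),   /* low_airspeed flips outcome */
               stall_warning(true, false));  /* high_aoa flips outcome     */
        return 0;
    }

If requirements-based tests leave part of this code uncovered, the gap points back to the requirements – exactly how DO-178C uses structural coverage to expose weak or missing requirements.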

 

In Brooks’ 1975 book The Mythical Man-Month, it is revealed that a leading cause of software defects is “assumptions”, and that these assumptions are often the result of weak requirements. The first version of DO-178 was developed soon thereafter, fully cognizant of Brooks’ writings. When a software developer cannot explicitly understand a desired result (“software output”), that developer is likely to make implementation assumptions. Of course, the developer should instead strive to achieve an improved requirement. Yet developers are much more likely to simply press on and do what they think best, not necessarily what is right. How do good avionics managers avoid this? They employ the following best practices:

 

  • Utilize detailed software requirements standards which are reviewed at SOI #1 (the first Stage of Involvement audit), and which include examples of good, and not-so-good, high-level requirements and low-level requirements.

 

  • Have software testers review the requirements BEFORE the developers see them. Those testers then try to write deterministic test cases from the requirements without making any assumptions; wherever a requirement is not clear enough for that, they feed their questions back into the requirements development process to yield improved requirements (see the hypothetical before-and-after example following this list).

 

  • Use software modelling where systems engineers and software engineers use a shared-modelling platform and formal language (SCADE, SysML, UML, etc.). This shared language minimizes the very assumptions endemic to weak requirements.


  3. V = R + T + A

 

Yes, engineers love equations! The above equation is clear, but why should the “A” be so small? Again, it’s about health – specifically, healthy verification of software requirements.

 

But is avionics software health really as simple as following an equation?  Almost.  But the equation must be properly understood:

 

V = R  + T  + A

 

Verification  =  Reviews  +  Tests  +  A Very Small Analysis

 

That’s right, in DO-178C, verification is performed via a combination of reviews, tests, and analysis:

 

  • Reviews: virtually everything is reviewed – plans, standards, safety artefacts, requirements, design, code, and tests.

 

  • Tests:  requirements and code are tested via ground-based executable tests of flight software.

 

  • Analysis: when the above combination of reviews and tests does not completely satisfy DO-178C’s verification objective, additional analysis must be performed.

 

The verification equation is best satisfied when the requirements are sufficiently detailed that two independent verifiers could achieve equivalent assessment results. Remember: verification does NOT directly improve software. The goal of verification is to assess the software – specifically, whether the objectives of the DO-178C-compliant plans and standards were fulfilled. Actual software improvement then comes via the feedback process, which feeds into improving the requirements, design, and code.

 

It is imperative that requirement granularity be sufficient to devise robustness tests based on those requirements. Robustness should answer the question: “Does the software behave consistently and deterministically under less benign conditions – error values, boundary values, performance limits, illegal state transitions, off-nominal timing values, etc.?” In other words, the “rainy day” scenarios. After robustness testing, DAL C, B, and A verification includes “white-box” tests, where actual software attributes and structural coverage are assessed. One objective is to assess the absence of dead code (code which has no reason or requirement to be there and which could and should be removed) and the sufficiency of mechanisms to ensure non-execution of deactivated code. Experienced systems engineers elicit detailed requirements, and expert avionics testers are adept at devising test cases to assess software/requirements robustness and dead/deactivated code.
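As a minimal sketch (the function, ranges, and error value below are invented for illustration), a robustness test deliberately feeds the “rainy day” inputs – boundary values, out-of-range values, and other error conditions – and confirms that the behaviour remains defined and deterministic:

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical unit under test: scales a raw 12-bit ADC reading
       (valid range 0..4095) to a 0..100 % flap position.  Out-of-range
       input yields a defined error value (-1) rather than garbage. */
    static int flap_position_pct(int raw_adc)
    {
        if (raw_adc < 0 || raw_adc > 4095) {
            return -1;
        }
        return (raw_adc * 100) / 4095;
    }

    int main(void)
    {
        /* Normal-range ("sunny day") checks. */
        assert(flap_position_pct(0)    == 0);
        assert(flap_position_pct(4095) == 100);

        /* Robustness ("rainy day") checks: boundary and error values. */
        assert(flap_position_pct(-1)    == -1);
        assert(flap_position_pct(4096)  == -1);
        assert(flap_position_pct(99999) == -1);

        printf("robustness checks passed\n");
        return 0;
    }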

 

Voilà: V = R + T + A


  4. Designing Software Failure

 

Design is like that distant uncle you see only once a year at Christmas or Thanksgiving. Across the entire life cycle, nothing is as forgotten as design. People often fail at requirements, but plenty has been written and debated in conferences and training sessions about their importance. The same is true for V&V and testing. Code has always been the center of gravity – more for emotional reasons than for its actual impact on output quality. So what is left over and abandoned? Design! Indeed, design is often treated as a distant uncle, and not a very wealthy one. And to some degree, agile processes (which may be helpful when well applied) are pushing design even further to the sidelines. That is why frameworks and standards like AUTOSAR exist: not necessarily just to enable modularity, integration, and reusability, but also because most people are bad at design and do not realize just how bad they are. Therefore, DO-178C applied properly can provide a useful framework for acceptable design. In the past, V&V was looked down upon by software developers – a set of activities left to those who were not sufficiently skilled at programming and were therefore relegated to the back seats of IT’s second class – but for a few years now even non-safety-critical industries, such as banking, have started to view V&V as very important, explicitly procuring such services.

 

Back to our metaphor. Winning athletes focus on optimizing their capacity for physical movement, knowing that they cannot fundamentally alter their body’s “design”.  In athletics, the design of the human body is the ultimate equalizer: most bodies have fundamentally similar “designs”. Not so in avionics:  each system has an implementation customized to its particular design. Avionics software developers thus depend upon a robust and flexible design to yield the necessary consistency and determinism while affording the possibility of future system evolution. So what are the causes of failure in the design of avionics systems?  Over the past 25 years, the engineers at CRITICAL Software and AFuzion have worked on 300+ avionics systems – here are the most common design techniques observed which can result in failure:

 

  • Reusing prior designs which are poorly understood or documented. If you are inheriting such a design, first document it, then analyse it to see whether it is even worth modifying. Like modifying an old or decrepit building, it may be more cost-effective to simply start over, the right way.

 

  • Failing to have, and verify conformance to, a software design standard that specifies:
    • Details for all external and internal interfaces, including full bit patterns, encoding rules, use of any variant records, and source and destination information for all data items (remember: include internal interfaces).
    • Rules for defensive design/coding (a minimal sketch appears after this list).
    • Health monitoring details.
    • Task prioritization schemes.
    • RTOS and BSP usage and limitations, plus a reference to allowable APIs (those designated “DO-178C certifiable”).
    • Rules for controlling coupling, which provide sufficient detail to perform subsequent coupling analysis.

 

  • Failing to encapsulate hardware dependencies to yield portability. Remember, for successful avionics programs, the question is not “Will it ever be modified?” but rather “When will it be modified?” Most successful avionics systems are eventually updated with newer hardware and increased functionality. Only by pre-planning future portability can such upgrades be accommodated. Plan for common core functionality which should not change, partitioned away from hardware-specific functionality that is encapsulated within its own modules. Minimize the scope of change so that the vast majority of software modules, objects/classes, and design elements can remain unchanged and unscathed.
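The following is a minimal sketch of the kind of defensive design/coding rules a design standard might mandate (the functions and limits are invented): validate data at module boundaries, check return values, and make every switch deterministic with a default case.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { GEAR_UP, GEAR_DOWN, GEAR_TRANSIT } gear_state_t;

    /* Defensive rule: validate data crossing the module boundary. */
    static bool set_commanded_pitch(int16_t pitch_deg, int16_t *out_pitch_deg)
    {
        if (out_pitch_deg == NULL) {
            return false;                      /* never dereference NULL    */
        }
        if (pitch_deg < -30 || pitch_deg > 30) {
            return false;                      /* reject out-of-range input */
        }
        *out_pitch_deg = pitch_deg;
        return true;
    }

    /* Defensive rule: every switch handles the "cannot happen" case
       deterministically via a default branch. */
    static const char *gear_state_name(gear_state_t state)
    {
        switch (state) {
        case GEAR_UP:      return "UP";
        case GEAR_DOWN:    return "DOWN";
        case GEAR_TRANSIT: return "TRANSIT";
        default:           return "INVALID";  /* defined off-nominal result */
        }
    }

    int main(void)
    {
        int16_t pitch = 0;

        /* Defensive rule: always check return values. */
        if (!set_commanded_pitch(45, &pitch)) {
            printf("out-of-range pitch command rejected\n");
        }
        printf("gear state: %s\n", gear_state_name((gear_state_t)7));
        return 0;
    }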

 

Bad design is also the key source of V&V difficulties when it comes to incremental testing, data and control flow analysis, effective development of test stubs and many more issues. What is strange is that one will often see people pointing fingers at the requirements or V&V team without realizing that disastrous design is actually the fundamental cause.


  5. Encoded Code

 

DO-178C requires that the “software language for humans to program avionics computers” is clearly understood by developers, reviewers, testers, and auditors. In other words, that it is absolutely NOT “encoded”.  The software languages used to program avionics computers must be common languages subject to defined compilation rules. Virtually any software language can be used (C, C++, Java, Assembly, Ada, Jovial, etc., none of which ever require compiler qualification or validation). However, for DAL A, B, and C there must be a defined coding standard. That coding standard must restrict unsafe constructs and operations while ensuring determinism, and the resulting source code must be verified against the designated and approved coding standard.  In avionics software development, the assumption is “guilty until proven innocent”.  So what are the best ways to fail at the level of code?

 

  • Failing to select a software language which has a coding standard that is accepted as safe by the aviation community. C, C++, and Ada are the most commonly used aviation software languages, and all have widely accepted safe coding standards or subsets (for example, MISRA C, MISRA C++, and SPARK for Ada).

 

  • Failing to perform static code analysis prior to exiting the coding stage. While not required by DO-178C, static code analysis is performed via various commercial third-party tools. These tools can be formally qualified (see DO-330 Tool Qualification above), in which case the code peer review may not be required. However, even unqualified tools are usually better (and exceptionally faster) at finding hundreds of types of the most common software coding errors committed by humans.

 

  • Failing to define complexity metrics within the project’s required software coding standard, and failing to enforce them. Complex code (measured via cyclomatic complexity) is a harbinger of disaster. However, defining what counts as “too complex” is a subjective judgement, so such a definition is not provided within DO-178C. That does not mean that any project is free to have overly complex code. Quite the opposite: complexity metrics must be defined and properly enforced (a small illustration follows the code review inputs below).

 

  • Failing to realize that DO-178C’s coupling analysis is not solely performed by parsing or reviewing source code. Coupling analysis requires the consideration and analysis of software design and interfaces, not merely local source code reviews.

 

  • Failing to read between the lines of DO-178C to understand that there are six inputs required for software code reviews:
    1. Source code
    2. Source code checklist
    3. Software coding standard
    4. Software design
    5. Software requirements
    6. Trace matrix showing which software requirements are allocated to the source code under review

 

Each source code review must identify each of the above inputs, along with the artefact version identifier used for that review.
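As a small, hypothetical illustration of the complexity metric mentioned above: cyclomatic complexity counts independent paths through a function (roughly, decision points plus one), and most static analysis tools report it automatically. A coding standard might, for example, cap it at 10 per function; the specific limit is a project choice, not a DO-178C requirement.

    #include <stdio.h>

    /* Cyclomatic complexity is roughly "decision points + 1".  Below there
       are three decision points (two 'if' statements plus one '||'), so the
       cyclomatic complexity of clamp_throttle() is 4 - low, and easy to
       cover completely with requirements-based tests. */
    static int clamp_throttle(int commanded_pct, int engine_running)
    {
        if (!engine_running) {                           /* decision 1        */
            return 0;
        }
        if (commanded_pct < 0 || commanded_pct > 100) {  /* decisions 2 and 3 */
            return 0;
        }
        return commanded_pct;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               clamp_throttle(50, 1),    /* normal        -> 50 */
               clamp_throttle(150, 1),   /* out of range  -> 0  */
               clamp_throttle(50, 0));   /* engine off    -> 0  */
        return 0;
    }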

Hope you enjoyed the first half of this free technical whitepaper, “How To Fail at Aviation Software Development (and How To Succeed!)”. To download the full paper, just visit AFuzion at the link referenced above and request a free download.

For information on public or private DO-178C training classes, visit AFuzion: Public and Private DO-178C Training Classes.

For a fun 1-minute video, “What is AFuzion?”, click here: What is AFuzion? Fun 1-Minute Video.