Category Archives: E-Learning DO-178C/ED-12C

Extraneous Code

A new term appears in the structural coverage analysis resolution section (§6.4.4.3): extraneous code, which is an extension of “dead code”. The idea is to consider all code (or data) that is the result of an error, whether or not this code is exercised. The definition of dead code is limited to executable object code that cannot be exercised. Extraneous code includes dead code, but also all pieces of code, found at the source or object code level, that may or may not be exercised.

The fundamental idea remains unchanged. This code, executable or not, should be removed. However, it is now allowed to keep this code as long as it is demonstrated that (1) the extraneous code does not exist in the executable object code, and (2) procedures exist to prevent its inclusion in future software releases.

Here is the definition of “extraneous code” and the new definition of “dead code”, which provides more examples of exceptions:

Dead code – Executable Object Code (or data) which exists as a result of a software development error but cannot be executed (code) or used (data) in any operational configuration of the target computer environment. It is not traceable to a system or software requirement. The following exceptions are often mistakenly categorized as dead code but are necessary for implementation of the requirements/design: embedded identifiers, defensive programming structures to improve robustness, and deactivated code such as unused library functions.

Extraneous code – Code (or data) that is not traceable to any system or software requirement. An example of extraneous code is legacy code that was incorrectly retained although its requirements and test cases were removed. Another example of extraneous code is dead code.
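To make the distinction concrete, here is a minimal C sketch (all names and the requirement reference are hypothetical): the first function is traceable to a requirement, while the second is a leftover from a removed requirement with no remaining call site, i.e., dead code that would also count as extraneous code if retained.

```c
#include <stdint.h>

/* Traceable to a (hypothetical) requirement REQ-HLR-042:
 * clamp the commanded deflection to +/-30 units. */
int32_t clamp_deflection(int32_t cmd) {
    if (cmd > 30)  return 30;
    if (cmd < -30) return -30;
    return cmd;
}

/* Dead code: left over after its requirement and test cases were
 * removed; no call site remains, so it cannot be executed in any
 * operational configuration and is not traceable to any requirement. */
int32_t legacy_scale(int32_t cmd) {
    return cmd * 2;
}
```

Note that defensive programming structures (such as the clamping above) are precisely the kind of constructs the new definition says are often mistakenly categorized as dead code.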

Parameter Data Item (Configuration files)

Parameter Data Item is a new topic. It is intended to address the possibility of producing and modifying configuration tables or databases separately from the executable object code. The guidance is applicable when such data is modified and the executable object code is not reverified.

The new text introduces two key terms, “parameter data item” (the general term) and “parameter data item file” (the executable representation of the PDI):

Parameter data item – A set of data that, when in the form of a Parameter Data Item File, influence the behavior of the software without modifying the Executable Object Code and that is managed as a separate configuration item. Examples include databases and configuration tables.

Parameter Data Item File – The representation of the parameter data item that is directly usable by the processing unit of the target computer. A Parameter Data Item File is an instantiation of the parameter data item containing defined values for each data element.
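As an illustration, a PDI could be a gain table whose structure, attributes and value ranges are fixed by its high-level requirements (the “usage domain”). Here is a C sketch, with hypothetical names and limits, of a check that a given instantiation stays inside that usage domain:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical PDI: a gain table loaded separately from the
 * executable object code and managed as its own configuration item. */
typedef struct {
    uint16_t num_entries;   /* attribute fixed by the PDI HLR      */
    int16_t  gain[8];       /* values constrained by the PDI HLR   */
} gain_table_t;

/* Usage-domain check derived from the (hypothetical) PDI HLR:
 * 1 to 8 entries, each gain within [-100, 100]. */
bool pdi_in_usage_domain(const gain_table_t *t) {
    if (t->num_entries < 1 || t->num_entries > 8)
        return false;
    for (uint16_t i = 0; i < t->num_entries; i++) {
        if (t->gain[i] < -100 || t->gain[i] > 100)
            return false;
    }
    return true;
}
```

This mirrors the split the guidance makes: the executable object code is verified to handle any PDI inside the usage domain, while each PDI file instantiation is verified separately against the same HLR.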

As the PDI file is separate from the executable object code, multiple changes were made throughout the document to replace “executable object code” with “executable object code and Parameter Data Item Files”, as all considerations on generation, identification and management of the executable object code are applicable to Parameter Data Item Files.

The use of a PDI impacts all the processes:

  • A new section is added in “Software Considerations in the System Life Cycle Processes” to highlight the possible impact of this approach on the system.
  • During the planning process, the processes applicable to PDIs should be defined and described in the plans, particularly in the PSAC, as additional considerations. Software load control and compatibility aspects should also be addressed.
  • A PDI is subject to high-level requirements development. These requirements define its structure, attributes and (when applicable) values; this is often called the “usage domain”. These data are treated as HLR rather than LLR so that the guidance remains applicable to level D software.
  • The PDI files are generated during the integration process.
  • Of key importance is the new section 6.6 on the verification of the PDI. This section defines the conditions under which the verification of the PDI may be conducted separately from the executable object code. These conditions are tied to the coverage of the executable object code verification: it must be demonstrated that the executable object code is able to handle the PDI values within the limits defined by the PDI HLR, and that it is robust against invalid structures and/or attributes.
  • The verification objectives on the PDI file itself, conducted separately, are defined and summarized in table A-5. The first objective is to verify that the PDI file complies with its HLR (structure, attributes) and that it does not contain any unintended element. This objective also includes verifying the correctness and consistency of the element values (not merely that each value lies within the range defined in the HLR). The second objective is to verify the completeness of the verification.

It should be noted that the PDI file is identified as Software Life Cycle Data (§11.22), and is also the topic of a discussion paper (DP#20) in DO-248C/ED-94C, providing clarifications and examples.

Verification of additional Code

Under certain conditions, DO-178B/ED-12B section 6.4.4.2.b required the applicant to verify the traceability between source code and object code. In case of non-direct traceability, additional verification activities were expected in order to demonstrate the correctness of the generated code sequences. However, there was nothing about this in the table A-7 objectives.

This DO-178B/ED-12B text and the associated expectations have often been misunderstood. Therefore, clarifications were needed and are now incorporated in DO-178C/ED-12C:

  • In DO-178B/ED-12B, the wording “the analysis may be performed on the source code, unless …” seemed to suggest that the appropriate level was the object code, which was not the initial intent. The text has been updated: structural coverage analysis may now be performed at any level, i.e., source code, object code or executable object code. It is up to the applicant to choose the most appropriate level.
  • Independently of the form of the code used to perform the structural coverage analysis, if the software level is A, the code produced by the compiler, linker or other tools used to generate the executable object code needs to be analyzed. If such tools generate additional code sequences that are not directly traceable to the source code, additional verification should be conducted.
  • The meaning of the words “direct traceability” is now clarified in a note, which explains that “branches” and “side effects” should be considered.
  • Table A-7 reflects this change: a new objective, applicable only to level A, is added.

It should be noted that compared to DO-178B/ED-12B, there is no extra information in DO-178C/ED-12C regarding the exact nature of this “additional verification”. The guidance remains focused on the general verification objective consisting in establishing “the correctness of such generated code sequences”.

Regarding this additional verification, DO-178B/ED-12B and CAST paper #12 were clearly limited to compiler effects. The new text in section 6.4.4.2.b brings the other code-generation tools into play. Therefore, the activity may no longer be limited to a traceability analysis between source code and object code, but may also need to consider the effects of all tools used in the executable object code generation chain.
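A small C illustration of the issue (the example is hypothetical): the source contains no visible branch, yet many compilers implement signed division by a power of two with an extra sign-correction sequence in the object code that has no direct source counterpart; this is exactly the kind of added code sequence that the additional verification must address.

```c
#include <stdint.h>

/* Source level: a single expression, no branch, no side effect.
 * Object level: because C requires truncation toward zero, compilers
 * commonly emit an added sign fix-up (a conditional add or branch)
 * for negative inputs, a code sequence not directly traceable to
 * the source line below. */
int32_t half(int32_t x) {
    return x / 2;
}
```

A structural coverage analysis performed on source code alone would not see the extra branch; the level A objective asks the applicant to analyze such compiler-generated sequences and verify their correctness.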

Structural Coverage Analysis and data and control coupling

Two concerns were identified on this topic: Structural testing (i.e., testing based on the code structure) and data and control coupling.

On the first concern, an additional bullet (d) is added in section 6.4.4.1 (Requirements-based test coverage analysis). This bullet provides a link from the section dedicated to requirements-based testing to the structural coverage analysis. It is now clearly stated that only requirements-based tests count for structural coverage analysis, and that an analysis may be necessary to demonstrate this.

For data and control coupling, it was necessary to re-affirm that objective A7-8 is not a verification of the data/control coupling itself. Data and control coupling are defined in the design data as part of the architecture. Verification of this architecture, including interfaces between components, is part of the verification of the outputs of the design process (table A-4). Compliance of the source code with this architecture is also verified as part of the verification of the source code (table A-5). Objective A7-8 is related to the structural coverage analysis, and thus to the verification of test data. Therefore, the activity needed to satisfy the objective consists in analyzing whether the requirements-based tests fully exercised the coupling between the components.

To emphasize the above clarification, the introduction text in the Structural Coverage Analysis section (§6.4.4.2) now lists the “interfaces between the components” as an input of the Analysis.

An example of data and control coupling is provided in the modified FAQ#67 of DO-248C/ED-94C. This FAQ also identifies the typical test cases that should be developed to satisfy the §6.4.4.1.d objective for this example.
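A minimal sketch, in C with hypothetical names, of a data coupling that requirements-based tests should exercise: component A writes an interface variable defined in the design data, and component B reads it. The coupling is only covered if some requirements-based test drives a value through A before observing B.

```c
#include <stdint.h>

/* Coupling interface defined in the (hypothetical) design data:
 * component A produces a filtered speed that component B consumes. */
static int32_t filtered_speed = 0;

/* Component A: simple first-order smoothing of the raw input. */
void sensor_filter_update(int32_t raw) {
    filtered_speed = (filtered_speed + raw) / 2;
}

/* Component B: raises an alert when the filtered speed exceeds a
 * (hypothetical) threshold of 250. */
int32_t alert_level(void) {
    return (filtered_speed > 250) ? 1 : 0;
}
```

Testing A and B in isolation with stubs would not satisfy the coupling-coverage concern; a requirements-based test must exercise the path A → `filtered_speed` → B.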

Robustness

In DO-178B/ED-12B, robustness testing was sometimes misinterpreted as additional tests that supplemented requirements-based tests. This is now clarified: the text definitively states that all tests, normal and robustness, should be requirements-based.

A note is added in section §6.4.2 on requirements-based test selection: “Robustness test cases are requirements-based. The robustness testing criteria cannot be fully satisfied if the software requirements do not specify the correct software response to abnormal conditions and inputs. The test cases may reveal inadequacies in the software requirements, in which case the software requirements should be modified. Conversely, if a complete set of requirements exists that covers all abnormal conditions and inputs, the robustness test cases will follow from those software requirements.”

To be more flexible, it is also recognized that mechanisms described in the standards may be used to improve robustness. So, implicitly, some robustness tests should be developed to assess the correctness of the implementation of these mechanisms.

Additional text is also provided in FAQ#32 in DO-248C/ED-94C (What are defensive programming practices?). This FAQ makes a connection between programming practices and robustness but explains that programming practices don’t supersede the need for requirements specifying the correct software response to abnormal conditions and inputs.
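A hedged C sketch of a requirements-based robustness case (the requirement, names and limits are hypothetical): the HLR specifies the response to abnormal input, so the robustness test case follows directly from the requirement rather than being an ad hoc addition.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical HLR: for altitude inputs outside [0, 50000] ft, the
 * function shall report failure and leave the output unchanged.
 * Because the abnormal response is specified, the robustness test
 * cases derive from this requirement. */
bool convert_altitude(int32_t alt_ft, int32_t *out_m) {
    if (alt_ft < 0 || alt_ft > 50000)
        return false;                    /* specified abnormal response */
    *out_m = (alt_ft * 3048) / 10000;    /* feet to metres, integer scaling */
    return true;
}
```

A normal test exercises the conversion inside the range; the robustness tests exercise the out-of-range inputs, both tracing to the same requirement.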

WCET and Stack analysis

WCET (Worst Case Execution Time) and Stack analysis were identified in DO-178B/ED-12B as part of reviews and analysis of the source code verification process (objective “accuracy and consistency” in 6.3.4.f). But time and memory assessment might not be achieved only through reviews and analysis of source code.

Limited additions were made to try to address this concern.

  • In §6.3.4.f, a sentence is added requiring that the compiler, the linker and the hardware be assessed for their impact on the WCET.
  • In the introduction to the section on software reviews and analysis (§6.3), it is also identified that reviews and analysis alone may not completely satisfy some objectives (e.g. WCET, stack analysis) and that some tests may be also necessary.

FAQ#73, “Are timing measurements during testing sufficient or is a rigorous demonstration of worst-case timing necessary?”, was reworked to provide a complete discussion of this topic, but the revision was editorial in nature and doesn’t provide additional information.
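A small C example of why source review alone cannot close the WCET objective (names are hypothetical): the loop bound is input-dependent, and the per-iteration cost depends on the compiler, linker and hardware (caches, pipelines), so an upper bound on the input plus timing analysis or measurement on the target is needed.

```c
#include <stdint.h>

/* Source review shows the loop is bounded by len, but cannot give a
 * time bound: the iteration count is input-dependent, and the cost of
 * each iteration is determined by the generated code and the target
 * hardware, not by the source text. */
uint32_t checksum(const uint8_t *buf, uint32_t len) {
    uint32_t sum = 0;
    for (uint32_t i = 0; i < len; i++) {
        sum += buf[i];   /* WCET bound needs an upper limit on len
                            plus target timing data */
    }
    return sum;
}
```

This is the situation §6.3 now acknowledges: reviews and analysis alone may not completely satisfy the objective, and tests (timing measurements on the target) may also be necessary.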

Deactivated code

Deactivated code was mainly addressed in DO-178B/ED-12B as something to be considered during the integration process and while resolving the structural coverage analysis issues. This was not fully consistent as the “means to ensure that deactivated code cannot be enabled in the target computer” was expected as part of the design data (§11.10), while no guidance was provided in the Design Process.

An approach that retains deactivated code in the final software should be considered much earlier in the project: during the planning process, of course, and then in the early phases of the development processes.

As a result, a new approach is introduced in DO-178C/ED-12C, starting with an enhancement of the definition, providing examples of what is and what is not deactivated code. The definition also clearly states that the deactivated code is really intentional, as it is traceable to requirements.

An activity description (new section §5.2.4) was added in the scope of the design process. This new section highlights the need to design and implement a protection mechanism, and also to develop the deactivated code in the same way as the rest of the code.

In the section about structural coverage analysis resolution, the two categories of deactivated code are clarified. In particular, for deactivated code that is not intended to be executed in any configuration, the text opens the door to developing this code at a lower software level and/or to alleviating the verification activities on this code (see §6.4.4.3).
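As a minimal sketch of a protection mechanism, deactivated code can be guarded by a configuration flag controlled at build time (the flag and the requirement reference are hypothetical): the guarded code is traceable to a requirement, so it is deactivated code, not dead code, and the mechanism itself must be designed and verified.

```c
#include <stdint.h>

/* Hypothetical build-time protection mechanism: the anti-ice option
 * is deactivated unless explicitly enabled for a given aircraft
 * configuration. */
#ifndef OPTION_ANTI_ICE
#define OPTION_ANTI_ICE 0
#endif

int32_t heater_command(int32_t temp_c) {
#if OPTION_ANTI_ICE
    /* Deactivated on aircraft without the anti-ice option; traceable
     * to (hypothetical) requirement REQ-AI-001, hence intentional. */
    if (temp_c < 5)
        return 1;
#endif
    (void)temp_c;   /* parameter unused when the option is deactivated */
    return 0;
}
```

The verification of such a mechanism includes demonstrating that the deactivated code cannot be enabled in the target computer for the certified configuration.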

Traceability and trace data

From a pure traceability standpoint, one of the main changes brought by DO-178C/ED-12C is to consider “trace data” as new software life cycle data. However, what these trace data should look like is not specified; their definition in the glossary allows multiple formats, and the traceability linkages may be shown with different techniques: “Data providing evidence of traceability of development and verification processes’ software life cycle data without implying the production of any particular artifact. Trace data may show linkages, for example, through the use of naming conventions or through the use of references or pointers either embedded in or external to the software life cycle data.”

Trace data should be used wherever it is necessary to establish an association between two life cycle data items. The new wording in §11.21 requires bi-directionality of this association:

Trace Data establishes the associations between life cycle data items contents. Trace Data should be provided that demonstrates bi-directional associations between:

  1. System requirements allocated to software and high-level requirements.
  2. High-level requirements and low-level requirements.
  3. Low-level requirements and Source Code.
  4. Software Requirements and test cases.
  5. Test cases and test procedures.
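As an illustration of trace data kept external to the life cycle data items themselves, here is a simple sketch in C (all identifiers are hypothetical) that stores requirement/test-case links and checks one direction of the bi-directional association:

```c
#include <stdbool.h>
#include <string.h>

/* Trace data held as external references between two life cycle data
 * items: software requirements and test cases (identifiers are
 * hypothetical). */
typedef struct {
    const char *req;    /* requirement identifier  */
    const char *test;   /* test case identifier    */
} trace_link_t;

static const trace_link_t links[] = {
    { "HLR-001", "TC-001" },
    { "HLR-002", "TC-002" },
};

/* One direction of the bi-directional check: does a requirement
 * trace to at least one test case? */
bool req_is_covered(const char *req) {
    for (unsigned i = 0; i < sizeof links / sizeof links[0]; i++) {
        if (strcmp(links[i].req, req) == 0)
            return true;
    }
    return false;
}
```

The reverse direction (every test case traces back to a requirement) would be checked the same way over the `test` field, giving the bi-directional association §11.21 requires.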

Another clarification is the explicit requirement to provide the rationale for derived requirements during the development process. The wording “reason for their existence” has been added in the activity description, in addition to the analysis. This rationale should also be passed to the system processes together with the requirements themselves.

Derived requirements

First of all, the definition of derived requirements was updated in DO-178C/ED-12C, the focus being more on the content of the requirements than on the traceability aspects. It is now considered that some “traceable” requirements can be identified as derived because they specify behavior beyond that specified in the higher level of requirements. It should be noted that this does not really change the previous definition, as the phrase “may not be traceable” already opened the door to the same interpretation. FAQ#36 of DO-248C/ED-94C was reworked to provide examples of the two “classes” of derived requirements.

A correct application of the derived requirements concept requires good experience and maturity within the software engineering team. The purpose of traceability is both to enable verification of the complete implementation of the higher level of requirements and to give visibility on the derived requirements, as now clarified in the new section §5.5. Therefore, beyond the accurate definition and identification of derived requirements, it is very important to define a traceability approach that actually supports and complies with this purpose.

DO-178C/ED-12C products

The DO-178C/ED-12C “suite” consists of seven documents:

  • DO-178B/ED-12B itself; the revised version is DO-178C/ED-12C.
  • DO-278/ED-109: This document is applicable to ground-based systems (CNS and ATM software). This kind of software is not airborne software but may have an impact on safety. Before DO-278/ED-109, application of DO-178B/ED-12B was requested, but some ground-software-specific needs had to be addressed, mainly the extensive use of COTS software.
  • DO-248B/ED-94B: This document provides clarifications of DO-178B/ED-12B. Its revision, DO-248C/ED-94C, also contains a rationale for each DO-178C/ED-12C objective.
  • Three supplements: A basic principle of DO-178C/ED-12C and DO-278A/ED-109A is to be technology independent. The current state of the art in software engineering clearly includes techniques that are useful in developing airborne or ground-based systems and thus should be addressed by the SCWG, but expanding the core text of the two documents would not have been a practical approach. Instead, SCWG recommended preparing one or more supplements to address several new specific techniques. Used in conjunction with DO-178C/ED-12C and DO-278A/ED-109A, these supplements would amend the guidance to account for the new software technologies. In the scope of this SCWG, three supplements were developed:
    • DO-331/ED-216: Model Based Development and Verification supplement
    • DO-332/ED-217: Object Oriented Technology and Related Techniques supplement
    • DO-333/ED-218: Formal Methods supplement
  • Tool Qualification Document: The tool qualification guidance in DO-178B/ED-12B had to be revised, as it was deemed unnecessarily difficult to apply and not sufficiently detailed to address tool specifics. The nature of the guidance to be provided for tool qualification does not fit the concept of a supplement, since it does not merely amend the core guidance but constitutes a complete and standalone set of recommendations, objectives, and guidance. In addition, it was recognized that the guidance for qualifying a tool should not be limited to the airborne domain. Based on these considerations, a completely new, domain-independent document was developed for tool qualification.