Adopting MISRA-C guidelines in your software development process – best practices

Implementing functional safety by means of achieving a certain safety integrity level (SIL) is a matter of reducing risks. On the software side this means, among other things, adopting a language subset, which is often a required parameter for a given SIL. As C and, to a lesser extent, C++ are popular programming languages for developing embedded software, a number of subsets for these languages have been defined over the years. In this article we will look into a number of best practices for introducing the MISRA-C 2004 subset into your development process.

The need for such subsets arose from the shortcomings of the C programming language: a syntax that is prone to silly mistakes which nevertheless result in perfectly legal expressions (e.g. the use of a single ‘=’ in a comparison); the lack of proper runtime checking; and the incompleteness of the ISO C standard, which leaves certain aspects undefined or implementation-defined and may thus lead to unpredictable behaviour, as noted by W. Basalaj [1]. Without properly addressing these shortcomings, the C language as such is unsuitable for developing safety-critical applications (MISRA [2]).
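
As an illustration of the first shortcoming, the fragment below is perfectly legal C, yet almost certainly not what its author intended (the variable name is made up for the example):

    #include <stdio.h>

    int main(void)
    {
        int sensor_ok = 0;

        /* Intended: if (sensor_ok == 1). The single '=' assigns 1 to sensor_ok,
         * so the condition is always true. Most compilers accept this silently
         * unless warnings are enabled; MISRA-C forbids assignment operators in
         * expressions that are used as Boolean values. */
        if (sensor_ok = 1)
        {
            printf("sensor reported OK\n");
        }

        return 0;
    }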

The automotive sector was one of the first industries to recognize this need. A number of car manufacturers and suppliers, such as Ford and Jaguar, formed a consortium named MISRA, which stands for “Motor Industry Software Reliability Association”. Its goal is to give advice regarding the design and development of safety-critical software in an automotive context.

Guidelines

One of the results of this collaborative effort is a set of guidelines, known as MISRA-C, that helps developers write safe C code by circumventing the shortcomings described above. These guidelines are updated on a regular basis, with the latest revision dating from 2004. L. Hatton [3] provides an in-depth comparison of the current 2004 revision with the previous 1998 revision.

MISRA-C consists of 141 rules, of which 121 are classified as required and 20 are deemed advisory. In order to claim MISRA-C compliance, the code must comply with all 121 required rules (with the exception of documented deviations, see later on) and the remaining 20 advisory rules must be upheld as far as practically possible. Discussing the guidelines themselves is beyond the scope of this article; please consult the MISRA-C guidelines [2] or the MISRA forum for an elaborate explanation of the rules.

Static code analysis

To uphold the MISRA-C rules while writing code, a static code analysis tool is essential. It checks the code for rule violations and is either standalone (e.g. PC-Lint/Flexelint, QAC, LDRA, …) or integrated into a compiler or development environment (Green Hills Software Multi, IAR Workbench, …). P. Emanuelsson and U. Nilsson [4] conducted a comparative study of a number of static analysis tools. In either case, none of the tools that support MISRA-C are free; they cover a large price range, with PC-Lint being quite affordable and QAC rather expensive. And even though the tools cover most of the rules, some rules can only be checked by a manual code review performed by a human being.

Selecting rules

Whether you’re starting from an existing codebase or from scratch, introducing the MISRA-C guidelines will bring along a number of technical challenges, as well as political resistance among the software engineers. After all, introducing coding rules means extra effort and constraints for the developers.

To overcome both, it is important to involve the software engineers in the process. Discuss as a team which rules should be included and which not, and motivate why. Start by producing a compliance matrix as suggested by MISRA [2]. This matrix documents which rules should be upheld and how they should be checked. Rules that are omitted are listed here as well, together with a rationale. Deviations are allowed as long as a formal deviation procedure is in place; for more information on deviations and deviation procedures, see paragraph 4.3.2 of the MISRA-C guidelines [2].
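
By way of illustration, one common practice (not mandated by MISRA, so adapt it to your own deviation procedure) is to mark the deviating code itself with a comment that points to the corresponding deviation record; the rule, record identifier and code below are fictitious:

    #include <stdlib.h>

    #define BUFFER_SIZE 256u

    static unsigned char *buffer;

    void init_buffer(void)
    {
        /* DEVIATION: MISRA-C:2004 rule against dynamic heap allocation.
         * Record   : entry DEV-042 in the compliance matrix (fictitious).
         * Rationale: allocation is performed exactly once at start-up, before
         *            the safety-related control loop is started. */
        buffer = malloc(BUFFER_SIZE);
    }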

Assigning priority

Once a selection of the rules has been made, assigning priority is in order. Especially when dealing with a large amount of legacy code, the number of reported defects can be overwhelming. Ranking the defects by priority provides overview and allows for a gradual introduction of the rules.

To set a priority for a given rule, assign a value from 0 to 2 to each of the following criteria:

  • probability – chance that ignoring this rule may lead to a bug:
    0:  very probable
    1:  probable
    2:  improbable
  • severity – the consequence that not complying with this rule may have on the quality of your software:
    0:  crashing or at least ill-performing
    1:  difficulties regarding maintainability
    2:  difficulties regarding portability
  • cost of modifying the code to comply with this rule:
    0:  inexpensive
    1:  average
    2:  expensive

Add the three numbers together; the sum determines the order in which the selected rules should be applied, with the lowest sum getting the highest priority.
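
As a hypothetical example: a rule whose violation very probably leads to a bug (0), with crashes as the consequence (0), and that is inexpensive to fix (0) scores 0 and is tackled first, whereas a rule that improbably leads to a bug (2), merely hampers portability (2), and is expensive to fix (2) scores 6 and can be postponed until the end.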

Check the checker

As mentioned before, a good static analysis tool that supports MISRA-C is indispensable. However, as no certification exists, any tool can claim MISRA support. To verify the quality and configuration of your tool, you need to run a number of test cases against its rule-defect detection. Reported violations that are false positives, as well as violations that go unreported (false negatives), are detrimental to say the least. False positives cost precious time on fixing defects that aren’t real ones, may cause true violations to be overlooked, and hurt the willingness of your engineers to accept the extra effort. False negatives are defects that the tool failed to report, leaving code that potentially contains bugs.

MISRA provides the MISRA-C exemplar test suite, a collection of test cases that allows you to detect false negatives and false positives in your tool’s reports. For the majority of the rules, one or more test cases are provided. Test your tool by analysing the test cases of the rules you have selected. Count the total number of reported violations and classify them as true positives (tp) or false positives (fp) according to the expected output documented in the test case. If an expected violation is missing from the report, count it as a false negative (fn). Then calculate the precision and recall using the following equations:

precision = tp / (tp + fp)

recall = tp / (tp + fn)

Recall and precision measure, respectively, the probability that an actual violation is found and the probability that a reported violation is correct. If either measure is lower than 80%, verify the configuration of the tool or consider using another tool altogether.
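
As a hypothetical example: suppose the tool reports 10 violations for the test cases of your selected rules, of which 8 match the expected output (tp = 8) and 2 do not (fp = 2), while 1 expected violation is missing from the report (fn = 1). Precision is then 8 / (8 + 2) = 0.80 and recall is 8 / (8 + 1) ≈ 0.89, so the tool just meets the threshold on precision and passes comfortably on recall.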

Moving to a MISRA-C compliant codebase

When applying the MISRA rules, fixing defects in your existing codebase inevitably means modifying existing code. Needless to say, this requires retesting: bugs may have been introduced or behaviour may have changed. Rerun all tests in which the affected code is involved, from unit tests up to integration tests.

Once your existing code is MISRA compliant, the key to maintaining that state is to secure static code analysis in your development process. Configure your build to automatically trigger the MISRA checker upon compilation, or make it a habit to verify MISRA compliance before committing code to your version control system. If necessary, run nightly builds that include MISRA checking and automatically mail the list of reported defects to the responsible developer.

Summary

In short, the best practices can be summed up as follows:

  • use and verify a static code analysis tool that supports MISRA-C
  • select rules as a team using a compliance matrix
  • assign priority to the selected rules (particularly relevant for legacy code)
  • retest your code after fixing defects
  • secure static code analysis in the development process

References

[1] W. Basalaj, “How to select a programming language subset to maximize software quality,” in Improvements in System Safety, F. Redmill and T. Anderson, Eds. Springer London, 2007, pp. 43-56.
[2] MISRA, Guidelines for the Use of the C Language in Critical Systems, ISBN 0 9524156 2 3 (paperback), ISBN 0 9524156 4 X (PDF), October 2004.
[3] L. Hatton, “Language subsetting in an industrial context: A comparison of MISRA C 1998 and MISRA C 2004,” Information and Software Technology, vol. 49, no. 5, May 2007, pp. 475-482.
[4] P. Emanuelsson and U. Nilsson, “A Comparative Study of Industrial Static Analysis Tools,” Electronic Notes in Theoretical Computer Science, vol. 217, July 2008.
