Automated requirements based testing from QA Systems Cantata with Wind River Workbench

Poorly defined and implemented requirements can lead to expensive project rework. Requirements based testing helps to ensure that the code does what was intended, and that its purpose has been defined to a suitable level of detail. This avoids expensive mistakes and aids customer acceptance.

Safety critical software projects use unit and integration tests as proof that requirements have been met. Safety standards such as ISO 26262 or IEC 61508 specify bi-directional traceability between requirements and tests to ensure that:

  • All requirements have been verified as correctly implemented
  • All functionality is intended and is specified in the requirements

Requirements coverage differs from code coverage as it measures how many of the requirements have been verified, rather than how much of the code has been executed by tests. Achieving 100% requirements coverage can be very time consuming, repetitive and expensive. Using an automated testing framework can dramatically reduce the work and time spent on these activities. QA Systems’ automated unit and integration testing tool Cantata for VxWorks parses source code to automatically generate test cases that are easily matched with requirements (or vice versa – matching requirements to test cases). It also substantially reduces the effort of attaining 100% requirements coverage, making it quick and easy to identify gaps in requirements, bugs in the code and unintended functionality.
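
As a simple illustration of the kind of check such a test case performs, consider the hypothetical C function below and a hand-written test sketch for it. The function, its requirements and all names are invented, and the test uses plain asserts rather than Cantata's own test script format; it only shows the principle of one traceable test case per requirement.

    #include <assert.h>

    /* Hypothetical function under test, e.g. from a VxWorks application. */
    int clamp_speed(int requested, int max)
    {
        if (requested > max)   /* requirement: speed shall never exceed the configured maximum */
            return max;
        if (requested < 0)     /* requirement: negative speed requests shall be treated as zero */
            return 0;
        return requested;      /* requirement: in-range requests pass through unchanged */
    }

    /* One check per requirement; each would be traced to its requirement ID. */
    void test_clamp_speed(void)
    {
        assert(clamp_speed(120, 100) == 100);  /* over-limit request is clamped   */
        assert(clamp_speed(-5,  100) ==   0);  /* negative request is rejected    */
        assert(clamp_speed(80,  100) ==  80);  /* in-range request passes through */
    }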

Requirements or test plans can be imported from any requirements management or ALM tool using ReqIF, xls, xlsx or csv formats, and are displayed in the Cantata Trace view in Workbench.
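
As an illustration, a minimal csv import file for three hypothetical requirements might look like the following. The requirement IDs and wording are invented, and the column layout shown is only an example, not a prescribed Cantata format.

    ID,Title,Description
    REQ-001,Speed limiting,The requested speed shall never exceed the configured maximum
    REQ-002,Negative requests,Negative speed requests shall be treated as zero
    REQ-003,Pass-through,In-range speed requests shall be passed through unchanged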

Cantata’s AutoTest capability can generate suites of passing unit test cases for the entire code base. These test case vectors exercise all code paths, using white-box access to set data and parameters and to control function call interfaces. As well as exercising the code, the tests check the parameters passed between functions, the values of accessible global data, the order of calls and the return values. The level of detail in AutoTests can be set to meet a code coverage level (e.g. function entry point, statement, decision, MC/DC), and tests can be run on embedded target platforms as required by most safety standards.
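
The sketch below shows, in plain C, the principle behind this call-interface checking: a test-side stub stands in for a called function and records how often it was called and with what parameters, so the test can verify call order and values as well as behaviour. Cantata generates and manages this kind of stubbing automatically; the hand-written version here, and all names in it, are purely illustrative.

    #include <assert.h>

    /* --- hypothetical code under test --------------------------------------- */
    int  set_pwm(int duty);          /* driver interface, replaced by a stub below */

    void motor_start(int duty)
    {
        (void)set_pwm(duty);         /* behaviour the test needs to observe */
    }

    /* --- test-side stub: records call count and parameters ------------------ */
    static int set_pwm_calls     = 0;
    static int set_pwm_last_duty = -1;

    int set_pwm(int duty)
    {
        set_pwm_calls++;
        set_pwm_last_duty = duty;
        return 0;                    /* simulated driver success */
    }

    void test_motor_start(void)
    {
        motor_start(50);                      /* exercise the code under test       */
        assert(set_pwm_calls     == 1);       /* driver called exactly once         */
        assert(set_pwm_last_duty == 50);      /* with the expected duty-cycle value */
    }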

The auto generated test cases can be viewed alongside the requirements and can be traced by matching them to requirements (or vice versa), using an intuitive drag and drop interface.

At this stage it is necessary to consider whether each requirement is completely and correctly verified by the test cases assigned to it. This step needs critical thinking, so it cannot be automated. However, AutoTest makes it much faster and easier by providing an English description of the unique path through the code that each test case verifies.

AutoTest generates passing tests, so if there are bugs in the code the test cases may not match the requirements. Once a bug is identified in this way and the code is fixed, creating a passing unit test is usually as simple as changing a parameter or return value in the Cantata GUI test case editor.

It is also easy to identify requirements that have not been implemented in the code because no tests can be traced to the un-implemented requirement.

There may also be test cases that do not match any requirement. In these cases a decision is needed as to whether the functionality is necessary; if it is, a corresponding requirement can be written. Common causes are defensive programming and system initialisation code that was not defined in the requirements. Alternatively, if the functionality should not exist, the code should be edited and the tests rerun.
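
A typical example of the defensive programming case is sketched below: the null-pointer guard is good practice, but if no requirement mentions it, the auto generated test case that exercises that branch has nothing to trace to, prompting either a new requirement or removal of the code. The function and requirement ID are hypothetical.

    #include <stddef.h>

    /* REQ-010 (hypothetical): the checksum of a received message shall be verified. */
    int verify_message(const unsigned char *msg, size_t len)
    {
        if (msg == NULL || len == 0)   /* defensive guard: no requirement traces here, */
            return -1;                 /* so the test case covering this branch flags  */
                                       /* functionality outside the requirements       */
        unsigned char sum = 0;
        for (size_t i = 0; i + 1 < len; i++)
            sum = (unsigned char)(sum + msg[i]);

        return (sum == msg[len - 1]) ? 0 : -1;   /* last byte holds the checksum */
    }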

Associations between tests and requirements are exported to the requirements management tool along with associated code coverage and test status.

As requirements change, managing the differences between versions / variants, and their relationships to existing tests can become a problem. In Cantata Trace, differences between requirement sets are highlighted, clearly showing new, changed and deleted requirements, and relationships with existing tests can be retained.

Once a correct set of passing tests has been traced to requirements, they can be rerun automatically using a continuous integration tool such as Jenkins to provide an automated set of regression tests.

Cantata is independently certified for use in the development of safety critical software to the highest safety integrity levels for all major software safety standards. Certification-ready functional test and coverage results are automatically produced in ASCII text format (directly from the target platform if testing on target) as required by safety standards.

Watch a video demonstration of automated requirements based testing using Cantata

Find out more about Cantata AutoTest automatic test case generation and requirements traceability