By Paul Henderson
Here's the next installment in my series on our embedded device software industry testing survey, conducted in April-May 2010 with almost 900 respondents (see previous blog postings). If you'd like a copy of the full report in PDF, please drop me an email at email@example.com and I will send it to you.
In this section of the survey we asked participants how they measure software quality today. The metrics most often cited by survey respondents were reactive in nature, such as tracking customer-reported failures and open defects, rather than metrics that could help them prevent defects.
Survey participants clearly indicated a strong desire to be more proactive, but their responses also reveal a significant disparity between their testing goals and reality (as measured by test coverage analysis): only a minority of respondents actually have access to the information they need to assess quality. Respondents further report that the software quality information they do receive is often later proved wrong, leaving them to make product readiness decisions on inaccurate information.
Respondents seem to know that you can't fix what you can't measure. They also know that the answer lies in more thorough testing done more often. But the common metrics in use, especially customer-reported problems, essentially count how many "horses have already left the barn." The disparity between testing goals and actual results, the management blind spots regarding software quality levels, and the decision-making based on bad information are all contributing to a crisis of confidence in software quality in the embedded industry.
Metrics Not Providing Management Confidence
When asked if their existing tools give management sufficient software quality information to make release readiness decisions with confidence, 75% of respondents said that they were at best only "somewhat confident." For 19.1% of respondents, the uncomfortable answer was flatly "No, not confident."
It seems that a high percentage of participants have learned to be skeptical about the software quality information upon which they must make decisions. Of those who knew one way or the other, 57% said that they have made release readiness decisions based on inaccurate or insufficient software quality information.
One telling group of data points is the responses to a question about which metrics respondents use to measure the software quality in their products. By far, the two most frequent answers were:
- The number of open defects being tracked
- The number of defects reported by customers
Notably, both of these measures are reactive; they essentially count defects already introduced into the product. The other, less-used metrics, such as test coverage and the number of tests executed, are more proactive measures that companies can control and manage.
Inadequate Test Coverage Information
This lack of confidence and reactive management approach can also be linked to the low use of test coverage and code coverage tools. These tools provide insight into whether software was actually tested (or not) and how thorough test suites are. They should be used both in unit testing at the developer's desktop and in functional system testing in the QA lab.
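Since coverage tools figure so prominently here, a minimal sketch may help show what they actually measure. The snippet below is purely illustrative (the `classify` function and the tracing helper are invented for the example, not any real coverage tool): it uses Python's built-in `sys.settrace` hook to record which lines of a function a given test run executes, which is the core idea behind tools like gcov or coverage.py.

```python
# A minimal sketch of line-coverage tracking using Python's built-in
# sys.settrace hook. Real projects would use a dedicated tool (e.g. gcov
# for C, coverage.py for Python); this only illustrates what such tools
# record: which lines a test run actually executes.
import sys

def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def trace_lines(fn):
    """Run fn() and return the set of line numbers executed in classify."""
    hit = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "classify":
            hit.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn()
    finally:
        sys.settrace(None)
    return hit

# A "test" that exercises only the positive branch...
partial = trace_lines(lambda: classify(5))
# ...versus a suite that exercises every branch.
full = trace_lines(lambda: [classify(-1), classify(0), classify(5)])

print(f"coverage: {len(partial)} of {len(full)} executable lines hit")
```

A single test leaves whole branches unexecuted, which is exactly the blind spot the survey respondents describe: without this kind of measurement, untested code is invisible.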
Overall, only a minority of participants make much use of code coverage and test coverage tools. This is evidenced in respondents' answers to the question of whether they track which parts of their code are actually exercised by their tests. Over 60% said they either don't track test coverage at all or only do so during development unit testing. Only about 32% said they measure and track what code is exercised in system tests of the fully integrated device.
Perhaps the most telling responses in this section of the survey came from questions about test coverage goals versus actual test coverage results on current projects. Nearly 65% of participants reported test coverage goals of over 70% of their code bases. But only 35% of respondents believe they have achieved greater than 70% code coverage on the project they are currently working on.
Next time I will cover respondents' feedback on the high cost of poor quality and how they measure this.