The High Cost of Poor Quality – Brand, Market, Budget

I’m continuing my series on our embedded device industry software testing survey, conducted in April-May 2010 with almost 900 respondents (see previous blog posting).  If you’d like a copy of the full report in PDF, please drop me an email at paul.henderson@windriver.com and I will send it to you.

In this section of the survey we asked participants how they measure the cost of poor quality. Respondents told us that the true cost of poor quality is much higher than program budget alone. The majority of respondents indicated that the true cost of poor quality is measured by damage to the company brand and revenue lost to missed market windows.

Despite these risks, participants are suffering repeated schedule slips due to late-cycle quality surprises, though a significant number decide to ship defective products anyway and fix them later. When they do ship defects, companies are finding that the cause is not the standard features they tested for, but rather unanticipated uses of the product, which naturally were not tested.

Substantial Budget Devoted to Testing, Yet Late Programs

Participants were asked how much of their total project resources are spent on testing. More than 77% of the respondents reported that testing activities typically consume over 20% of their total project budgets.  Nearly 21% reported testing costs totaling over 40% of their total project budgets.

Participants were asked what percentage of their overall test budget is allocated to testing maintenance releases and software upgrades.  A large majority (74.8%) said that they dedicate more than 10% to this lower-profile but evidently important testing activity.

While 20-40% of project budget for testing is significant, the responses to subsequent questions show that this hard cost of testing department resources is only a small part of the cost of quality – or rather, the cost of poor quality.

Participants were asked how poor quality was affecting their development cycle and how they responded to and measured this effect. Seventy-two percent (72%) of respondents reported that over the last year their projects had been disrupted by late-cycle discovery of critical software quality problems. The frequency of these disruptions was surprisingly high, with most respondents who had problems reporting that this happened more than once per project in the last year.

As to the question of how they handled these issues, 57.6% of those who experienced disruptions said that they delayed their products’ ship dates in order to fix the problems and retest the products.

Unanticipated Usage the Most Common Cause of Defects

When they did ship products with software defects that were discovered later by customers, it was most often unusual and unanticipated uses that exposed the problems:

1) Products used in unanticipated ways (58.1%)

2) Untested edge conditions (48.9%)

When asked how their companies measured the cost to the company when products were not delivered at planned quality, the top three measurements were significant business drivers:

1) Higher than expected program costs

2) Damage to brand due to shipped defects

3) Missed market window for the product

Next time I will review respondents’ feedback on what test tools they are using today and where they are investing for new test automation.