What the LA Lakers and Apple iPhone can teach us about software engineering

By Ido Sarig

Halfway into the 2010-2011 NBA basketball season, the Pacific Division-leading LA Lakers suffered an embarrassing loss to the bottom-dwelling Memphis Grizzlies. The LA Times described the performance of several of the players at the Sunday, Jan. 2, 2011 game as “sleepwalking” – which turned out to be not far from the truth!

Three Lakers players reported oversleeping before the game because the alarm clocks on their iPhones had malfunctioned that morning, part of a widely publicized failure.

If this sounds familiar to iPhone users, it’s because it was a repeat of a nearly identical episode, reported just two months earlier, when the iPhone alarm application failed to operate properly after the switch back from Daylight Saving Time to standard time.
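Apple never published the root cause, but the DST episode illustrates a classic scheduling pitfall: computing the next alarm firing as a fixed number of elapsed hours rather than as a wall-clock time in the user's time zone. A minimal Python sketch of that bug pattern (the file names and times here are illustrative, not Apple's code):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")
# Alarm fires at 7:00 AM PDT on Nov 6, 2010 -- the day before fall-back.
alarm = datetime(2010, 11, 6, 7, 0, tzinfo=tz)

# Bug pattern: schedule the next firing as "24 hours of elapsed time later".
# The DST switch means 24 elapsed hours lands at 6:00 AM local time.
utc_next = alarm.astimezone(timezone.utc) + timedelta(hours=24)
early = utc_next.astimezone(tz).strftime("%H:%M")   # "06:00" -- an hour early

# Fix: recompute the same wall-clock time in the user's local zone.
# (Python's aware-datetime arithmetic within one zone is wall-clock.)
local_next = alarm + timedelta(days=1)
on_time = local_next.strftime("%H:%M")              # "07:00"
```

Both results look plausible in isolation, which is exactly why this class of bug tends to surface only on the two days a year when the clocks change.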

What does this have to do with software engineering? It illustrates a well-known phenomenon in large-scale software applications, researched as early as the 1970s, when IBM performed a “distribution analysis of customer reported defects against their main commercial software applications” and were “surprised to find that defects were not randomly distributed through all of the modules of large applications” – or in plain English – some parts of the code are more buggy than others.

These “error-prone” modules, or “hotspots,” which commonly make up less than 10% of the modules, are often responsible for more than 50% of the reported defects. That’s actually good news for testers and developers: if we can identify these hotspots during testing, we can focus our efforts on a few well-defined areas of code, improve them, and quickly reduce the number of defects by a significant amount.
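Finding hotspots from defect data is straightforward once defects are attributed to modules: rank modules by defect count and take the smallest set that covers the target share of defects. A short sketch, using made-up module names and counts:

```python
# Hypothetical per-module defect counts pulled from a defect tracker.
defects = {
    "alarm.c": 42, "ui.c": 31, "clock.c": 8, "sync.c": 5,
    "net.c": 4, "audio.c": 3, "log.c": 2, "cfg.c": 1,
    "io.c": 1, "util.c": 1,
}

def hotspots(defect_counts, share=0.5):
    """Smallest set of modules accounting for at least `share` of all defects."""
    total = sum(defect_counts.values())
    picked, covered = [], 0
    for module, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
        picked.append(module)
        covered += count
        if covered / total >= share:
            break
    return picked

print(hotspots(defects))  # ['alarm.c', 'ui.c']
```

In this toy data set, 2 of the 10 modules account for 73 of the 98 reported defects, so testing and refactoring effort aimed at just those two files addresses most of the defect backlog.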

Wind River Test Management can help testers identify such problem areas in their application. Through integration with defect tracking tools, Wind River Test Management creates an association between a functional area (down to the individual file or even specific function within that file), a test case exercising that function, and the defects uncovered and reported by that test. It is easy to produce a report that lists problematic functional areas, as measured by the number of defects reported against them, and share it with the engineers responsible for the module.

Of course, it could be that the problem areas are ones that have been entirely neglected by your testing efforts – in which case Wind River Test Management’s ability to highlight the portions of device software that are not tested at all may come in handy…