In my last posting I mentioned I'd be running a webinar with James Grenning on Agile testing. James is a recognized expert and frequent speaker on the topic of software development and one of the original authors of the Manifesto for Agile Software Development.
We talked about the case for agility: today's embedded software projects inevitably face changing requirements and market conditions that force unplanned, mid-course corrections. The result is that what comes out is often not what was expected going in. Testing teams become the tail trying to wag the dog as they attempt to test quality in at the end of the project.
James made the case for iterative, incremental test-driven development (TDD), continuous testing and continuous integration. By breaking projects down into smaller chunks that can be designed and tested independently, teams are more likely to deliver timely, quality software that meets its functional requirements. In fact, with TDD, developers and testers work together up front to define the tests that will verify each chunk of functionality. This clarifies what the feature will do (and what failure modes need to be verified) and gets the testing done early, while the development work is fresh in the minds of the team. That avoids late-cycle surprises and snowballing design errors.
We also reviewed some of the subtleties of applying TDD to embedded development. Testing needs to happen not just on the developer's desktop but also cross-compiled and run on simulators, reference boards and finally the device itself. Only when the software is verified on the actual device can you be sure it works correctly.
To make all this work you need to employ continuous integration methods and automation technology. Rather than waiting until late in the cycle to integrate work from multiple developers or teams (software first, then hardware and software together), you need an environment where you can integrate frequently — bi-weekly, weekly or even daily. Again, the goal is early verification, before problems get out of control, and early feedback to keep the program on track.
The challenge with all this is how to make it work in the real world. How do you capture tests as they are developed and then reuse them on multiple targets for regression testing? How do you efficiently manage a range of targets, from hosts to reference boards to real devices? How do you know how effectively you are actually testing the device? Given the frequent changes, how do you focus on what's new or changed rather than retesting the same things over again? And lastly, with testers and developers working more closely together, how do you enable this collaboration, particularly across globally distributed teams?
This is where automation comes in. We discussed how Wind River Test Management can help address each of these challenges by providing an embedded-centric, collaborative testing environment with a built-in target device manager plus a dynamic analytics engine that lets you 'see inside' a device under test and provides timely, accurate feedback and control.
The replay for this event is now online. Click here if you want to watch the actual webinar. Enjoy!