Android software quality: it’s all in a single number

By Chris Buerger

Picture this: you are responsible for ensuring that the quality of the Android device you are about to ship will hold up in front of your customers. You are responsible for meeting the often elusive 'commercial quality' goal. And it doesn't matter whether the Android software is being released as an initial software load on a new class of device or as a maintenance release for a device that is already in market. Also, set aside for a moment whether you sourced the Android software from your hardware Original Design Manufacturer (ODM) as part of a package deal or whether it represents a significant investment of internal engineering budget.

The basic question to answer is simply: 'Is this software ready to be released?' Or, casting the net wider: will the software be good enough to stay below the return targets for the device? Will it be good enough to act as a differentiator against your competitors and thereby help meet your customer acquisition goals? And, if the software goes out to devices already in market, will it keep users as engaged and satisfied with your brand as the prior release did?

Big questions! Typically, a quality assurance process and the execution of an Android release-specific device test plan aim to assist management staff in answering them.

The test plan/process itself, of course, can be manual or automated or both. It can have performance, reliability, stress, soak, aging and compatibility test components and phases. There are many variations in any Android test approach, reflecting the priorities of the device owner and the use cases the device is meant to enable. However, one big challenge weaves itself like a red thread through this tapestry of Android software quality validation.

Here it is. Every method yields thousands of individual test results, and tens of thousands if automated testing is included, all of which feed into answering the single question of whether the device software is ready to ship. In reality, however, the sheer volume of test results creates overwhelming complexity. In essence, the challenge then becomes reducing that complexity to a point where a single Go/No Go decision can be made with confidence.

This challenge is addressed in a new Wind River Android test innovation that I am proud to announce as an integral part of our new FAST for Android 1.6 release: the Wind River FAST score for Android. This composite score produces a single number reflecting the quality of the Android stack on the Device Under Test (DUT) when compared to industry-standard benchmark devices such as Android Development Phones (e.g. the Nexus S). Based on a Wind River developed algorithm, the FAST for Android score extends familiar single-number quality scoring concepts, such as the ITU's E-Model for VoIP, to Android. The scoring mechanism is quick and repeatable, making it well suited to regression testing scenarios, and it resolves a real-world challenge for many Android test organizations coping with an ever more complex set of results to interpret and act on. (For readers who like to think in code, a simplified illustration of the general idea follows at the end of this post.)

So, are you skeptical or intrigued about using a composite Android quality score to assist in the release Go/No Go process? Post your thoughts below or contact us if you have any questions about the FAST score for Android.
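
To make the composite-score idea concrete, here is a minimal sketch of how such a score could be computed. To be clear, this is not the Wind River FAST algorithm, which is proprietary; the metric names, weights, 0-100 scale, and Go/No Go threshold below are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a composite device-quality score. The actual
# FAST score algorithm is proprietary; the metrics, weights, and
# thresholds here are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    weight: float          # relative importance of this metric
    higher_is_better: bool

# Assumed metric set: a mix of performance and reliability measures.
METRICS = [
    Metric("boot_time_s", weight=0.2, higher_is_better=False),
    Metric("ui_fps", weight=0.3, higher_is_better=True),
    Metric("app_launch_ms", weight=0.3, higher_is_better=False),
    Metric("stress_pass_rate", weight=0.2, higher_is_better=True),
]

def composite_score(dut: dict, benchmark: dict) -> float:
    """Return a 0-100 score for the Device Under Test (DUT), where 100
    means the DUT matches or beats the benchmark device (e.g. a Nexus S)
    on every weighted metric."""
    total = 0.0
    for m in METRICS:
        ratio = dut[m.name] / benchmark[m.name]
        if not m.higher_is_better:
            ratio = 1.0 / ratio              # invert so bigger is better
        total += m.weight * min(ratio, 1.0)  # cap credit at benchmark parity
    return 100.0 * total

if __name__ == "__main__":
    benchmark = {"boot_time_s": 30.0, "ui_fps": 55.0,
                 "app_launch_ms": 400.0, "stress_pass_rate": 0.99}
    dut = {"boot_time_s": 33.0, "ui_fps": 52.0,
           "app_launch_ms": 450.0, "stress_pass_rate": 0.97}
    score = composite_score(dut, benchmark)
    # 90.0 is an assumed release threshold, not a FAST-defined value.
    print(f"Composite score: {score:.1f} -> "
          f"{'Go' if score >= 90.0 else 'No Go'}")
```

The design choice worth noting is the normalization of every DUT measurement against the benchmark device: the final number then directly answers how close the device is to a known-good reference, which is what makes a fixed Go/No Go threshold meaningful across releases.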
