How organisations are tooling and testing for mobile quality

Remember 2011? ‘The Year of Mobile’? And then there was 2012, ‘The Year of Mobile’. And 2013, most definitely ‘The Year of Mobile’.

And 2014? Of course.

That such pronouncements continue on an annual basis is evidence that, while businesses clearly recognise the essential role that mobile now plays, many are still trying to sort out their strategies, put the right people, programmes and resources into place, and organise their digital workflows to reach the top of their mobile game.

Who owns the apps and websites? Who develops them? Who buys the tools? Who makes sure they work? What happens if they don’t? These are largely settled questions for traditional desktop sites but, in many cases, they’re still being answered for mobile. Keynote recently surveyed more than 1,600 mobile development and testing professionals to get a clearer picture of the current mobile testing and QA environment. The results, published in Keynote’s ‘The State of Mobile Software Quality 2014’ survey, describe a discipline that, while universally regarded as business-critical, often still struggles to find its place within an organisation and in its budget.

The mobile landscape in 2014

The need for effective apps and mobile websites is clearly recognised. In a recent Accenture survey of senior executives, more than three-quarters placed mobility in their top five priorities for 2014. Four in 10 companies report that they have aggressively pursued and invested in mobile technologies. At the same time, however, most say they have not made substantial progress overall; fewer than half say their mobile efforts have been effective.

When Accenture probed further, they found that in about seven of 10 companies the deficiencies included: the inability to keep pace with new mobile devices, systems and services; no clearly defined, centralised ownership of mobility initiatives within the organisation; failure to develop new or redesigned business processes; and a lack of internal and external skills.

Not surprisingly, some of these same themes surfaced in Keynote’s ‘State of Mobile Software Quality 2014’ survey. Many, though not all, of the organisational and operational challenges of mobility overall are present for mobile testing in particular. These include device challenges, organisational structure, and another major stressor, speed.

Pressure to perform

One of Keynote’s key survey questions was whether companies are feeling crunched by what’s going on in mobile – and the answer is yes.

The quality expectations for mobile apps are actually higher than on the desktop, yet the testing time allocated for mobile is roughly the same, the resources invested are lower and the release schedules are typically much more compressed. Essentially, mobile teams are being asked to do more, faster, with less.

Much of the pressure on the mobile testing team appears to be driven from the outside in, by users. Users are not adjusting their expectations to allow for slower mobile processors, slower networks and inherently longer latency. If people have a bad experience on a mobile device, they won’t go back to that app or site; they expect sites and apps to load in three seconds – some sources even say two. That pressure to perform is being passed on to the mobile teams but, as noted above, they have limited resources to deal with it. Many organisations still seem to be struggling to gain mastery over mobility.
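As a rough illustration of how a testing team might encode that load-time expectation as an automated pass/fail check, the sketch below measures a mobile page against a three-second budget. It is a minimal sketch, not Keynote’s tooling: it assumes an Appium server driving Chrome on an attached Android device, and the URL and device name are hypothetical.

```python
# Minimal sketch: check a mobile web page against a three-second load budget.
# Assumes an Appium server on localhost driving Chrome on an attached Android
# device; the site URL and device name are hypothetical.
from appium import webdriver

caps = {
    'platformName': 'Android',
    'deviceName': 'attached-device',   # hypothetical device name
    'browserName': 'Chrome',
}
driver = webdriver.Remote('http://localhost:4723/wd/hub', caps)
try:
    driver.get('https://m.example.com/')  # hypothetical mobile site
    # Navigation Timing API: milliseconds from navigation start to load end.
    load_ms = driver.execute_script(
        'return performance.timing.loadEventEnd - performance.timing.navigationStart;')
    assert load_ms <= 3000, 'Page loaded in %d ms, over the 3000 ms budget' % load_ms
finally:
    driver.quit()
```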

Where does mobile testing fit in?

Mobile testing responsibilities are handled differently in different organisations and, while some leave mobile web on the back burner, others are beginning to take a ‘mobile first’ approach. According to the Keynote survey, only a small majority, 55.5 percent, have dedicated mobile-specific testing groups: 19.7 percent have a centralised mobile testing group and 35.8 percent have smaller mobile groups within business units or divisions.

While the testing groups themselves may be split between centralised and distributed, most companies are holding onto centralised purchasing authority. A little more than half report that mobile testing tool decisions are made by a centralised QA or tools group.

What’s being tested on mobile?

By far, the biggest focus is on functional testing – build acceptance testing (BAT), regression tests, new feature tests, etc – with 47 percent of respondents ‘most concerned’ about this testing area. Performance testing, both pre- and post-launch, was a distant second with 24.6 percent, followed by usability testing at 21.9 percent.

Keynote advises a hardware-based approach, where sites are tested on real devices, rather than using software that only shows a theoretical version of the customer experience on offer. Some of this testing is already being done on real devices – 58 percent of respondents reported that they do most of their mobile web and app testing this way – suggesting that companies believe the way to ensure an app or website will work on a specific device is to test on that device. The survey respondents supported this: ‘easy access to many device models’ was rated the single most important feature needed for functional testing.

Device and OS proliferation are an ongoing headache for mobile development, operations and testing. Even Apple has seen its form factors and OS variants multiply and, of course, there are hundreds of Android OS and device variations. Rather than trying to test on a very large number of devices, testers typically choose a manageable sample that is representative of most of their user base. They are also pushing validation with real devices earlier in the development process, which has typically been emulator territory.

The sample may not be 20 different devices – it may be one Android and one iOS device, or five, or ten – but the ability to take a new build and very quickly run a series of automated tests across that sample is very important to companies. It enables them to deliver a much higher-quality build to the QA team, with high confidence that it has no fundamental problems.
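As a rough sketch of what such a build-acceptance run across a device sample can look like (the article does not describe any specific tooling), the example below launches a new build on a small set of devices and checks that the main screen appears. It assumes Appium with its Python client; the device names, APK path and element ID are made up.

```python
# Minimal build-acceptance sketch: launch the new build on a small device
# sample and confirm the home screen appears. Device names, the APK path and
# the element ID are hypothetical; assumes an Appium server on localhost.
from appium import webdriver

DEVICE_SAMPLE = [
    {'platformName': 'Android', 'platformVersion': '4.4', 'deviceName': 'Nexus 5'},
    {'platformName': 'Android', 'platformVersion': '4.3', 'deviceName': 'Galaxy S4'},
]

def build_acceptance(caps):
    caps = dict(caps, app='/builds/latest/app-under-test.apk')  # hypothetical build artefact
    driver = webdriver.Remote('http://localhost:4723/wd/hub', caps)
    try:
        # The build passes on this device if the app launches and shows its home screen.
        assert driver.find_element_by_accessibility_id('home_screen').is_displayed()
    finally:
        driver.quit()

if __name__ == '__main__':
    for caps in DEVICE_SAMPLE:
        build_acceptance(caps)
        print('Build acceptance passed on %s' % caps['deviceName'])
```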

The question of automation

‘Easy automation capabilities’ was the second most important feature for functional testing, after access to many device models. But while nearly 60 percent of respondents have employed automation, only 14 percent have automated more than half of their mobile testing. Resource constraints are one likely reason for this apparent under-utilisation of automation.

Automation in mobile is typically employed for regression testing of new releases – doing the grunt work of testing all of the existing functionality in an app or website to make sure none of it was broken in the new build. This speeds up the release and frees up QA people to do higher-value testing – like trying to break something.
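A hedged sketch of what such a regression check might look like: a unittest case that re-exercises an existing login flow against each new build, again assuming Appium’s Python client. The element IDs, credentials and capabilities are hypothetical, not drawn from the survey.

```python
# Minimal regression sketch: re-run an existing login flow against each new
# build to confirm it still works. Element IDs, credentials and capabilities
# are hypothetical; assumes an Appium server on localhost.
import unittest
from appium import webdriver

class LoginRegressionTest(unittest.TestCase):
    def setUp(self):
        caps = {
            'platformName': 'Android',
            'deviceName': 'attached-device',             # hypothetical
            'app': '/builds/latest/app-under-test.apk',  # hypothetical build artefact
        }
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', caps)

    def test_existing_login_still_works(self):
        d = self.driver
        d.find_element_by_id('com.example:id/username').send_keys('qa_user')
        d.find_element_by_id('com.example:id/password').send_keys('qa_password')
        d.find_element_by_id('com.example:id/login_button').click()
        # The welcome screen appearing means the pre-existing flow is unbroken.
        self.assertTrue(d.find_element_by_id('com.example:id/welcome').is_displayed())

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
```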

The other main use for automation in mobile is the above-mentioned build acceptance testing. Automation supports a more agile process with more frequent releases, as practised by organisations such as Facebook that may release multiple times a day. Automated BAT supports a faster build-release cycle characterised by small changes and a more streamlined QA process.

Powerful new scripting

Object-level scripting has enabled a new level of granularity and versatility for mobile test automation over the last year or so. It gives testers the ability to get at the object itself, instead of just looking at what appears on the screen and matching it with text recognition or image analysis.

Object scripting looks at the code, which is a much better way to drive your scripts. It means the script you write for Android device A will also work on Android device B. You have to write far less script, it is much easier to automate, and small changes to your UI will not break your scripts – so there is a lot less maintenance.
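As an illustration of the difference (not tied to any particular vendor’s tooling), the sketch below contrasts an object-level locator with a screen-text match, once more assuming Appium’s Python client and hypothetical IDs and labels.

```python
# Sketch contrasting object-level and screen-level element lookup.
# Element IDs, label text and capabilities are hypothetical.
from appium import webdriver

caps = {
    'platformName': 'Android',
    'deviceName': 'attached-device',            # same script for device A or B
    'app': '/builds/latest/app-under-test.apk',
}
driver = webdriver.Remote('http://localhost:4723/wd/hub', caps)
try:
    # Screen-level (the older approach): match the text painted on screen.
    # Re-wording, translating or restyling the label breaks this locator.
    by_text = driver.find_element_by_xpath('//*[@text="Log in"]')

    # Object-level: address the same control by its ID in the app's view
    # hierarchy. The locator is independent of screen size, font or styling,
    # so it resolves unchanged on different Android devices and survives
    # cosmetic changes to the UI.
    by_object = driver.find_element_by_id('com.example:id/login_button')
    by_object.click()
finally:
    driver.quit()
```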

Top testing challenges

Perhaps surprisingly, there were no screaming red flags in terms of mobile testing challenges; on a scale of one to 10, none even broke a seven. Tools, time and test devices were the top three challenges, followed by testing methodology/process and availability of mobile testing experts.

The industry as a whole is playing catch-up to the steep adoption curve in mobile. As a result, budgets are being put into place and effort is being directed at it. Organisations have to plan for staying ahead of the market. Right now, they need to be looking at the vast increase in traffic, the increase in sales and the transactions being done on mobile devices, and ask: where is this going to be a year from now? What tools and processes do we need to put in place?

One thing is certain: there is not going to be less traffic, fewer transactions or fewer sales on mobile devices. For many uses, mobile has already exceeded the desktop. It’s a top priority for retailers, financial institutions and all manner of media companies. It’s not too soon for test organisations to get their tooling, device access and processes into place to thoroughly vet the apps that are in production now, and the bigger and better ones that will be in production tomorrow. Because who knows? 2015 might just be ‘The Year of Mobile’.

Thomas Gronbach

Contributor

Thomas Gronbach is the Digital Quality Expert at Keynote.