
Tips and Tricks for Mastering the Art of Mobile App User Testing

by Joe Bland
12 min read
As a fundamental requirement for creating optimized UX, user testing for mobile apps is no different from user testing for any other product.

But while the concept may be familiar, the process of mobile app user testing has its own unique nuances. Mobile flows tend to be more step-by-step, with fewer choices per screen, and mobile goals are usually completed by a single person. From a practical perspective, mobile testing requires tools that run on, or closely approximate, the small single-user screen.

An effective mobile app user test needs to address these challenges through careful planning and implementation.

How to properly prepare for mobile app user testing

Defining your target audiences

Deep understanding of a product’s users is often a foundation of successful product development. So, even if you’ve been working on the same product for a long time, describing and refining your target audience profile is an excellent way to kick off user testing mobile apps.

A shortcut here might be user personas that you or others in your organization have created. To go deeper, sales and CRM data may offer insights on your current users. Furthermore, social media and online ad platforms can provide ‘lookalike’ profiles which reflect your product users.

If you’re user testing a mobile app that is new to market, or significantly different from something it will replace or augment, generally available data sets such as census data and keyword tools may help. Competitor websites and apps may also indicate the typical target market in your industry. A useful exercise is to list the product’s features and associated benefits, then work backwards by listing the implied jobs-to-be-done, job titles, training/education, location, history, and so on, to create a picture of the user. While not as solid as real user data, these approaches provide a good starting point.

The other target audience is, of course, those people who will read your final report. When you know who they are and what their goals are, you can select appropriate test methods and reporting formats, and define the goals for your test.

Selecting an appropriate mobile user testing method

User tests are about how users will behave in the app, whereas usability tests are concerned with ensuring the app delivers its purported features. Different mobile app user testing methods and approaches are used for different outcomes, and are chosen based on:

  • Suitability to the product being tested
  • Time, space, participant, and monetary costs
  • Helpfulness to the people who will receive the test report

The following outlines common approaches to user tests, any of which may be appropriate depending on your test goals.

  • Moderated, in-person: A moderator meets with the participant at a lab, office, home, or other location and acts as a neutral guide through the test.
  • Unmoderated, in-person: The participant attends a lab, office, or other location and receives written or recorded test instructions and feedback, or is observed by a tester who does not give any guidance.
  • Moderated, remote: A moderator conducts a video call with the participant and acts as a neutral guide through the test.
  • Unmoderated, remote: The participant conducts the test in their home, place of work, or other ‘own’ location, receiving written or recorded test instructions and feedback.

Hand-in-hand with the approach is the method to be used. The most common mobile app user testing method would be an exploratory prototype test, where the participant tries to achieve a given goal in the app. The prototype can be as simple as rudimentary paper wireframes, or as complex as a high-fidelity, functioning Figma or Sketch prototype, or even the finished app itself. Other test methods include first click tests, session recordings, 5-second tests, preference tests, and card sorting. Not all of these are academically considered ‘user’ tests but any could be conducted in relation to a mobile app.

Moderated user tests tend to produce qualitative insights: how and why does the app perform the way it does? However, quantitative (statistical) feedback can also come out of a mobile app user test, either by recording the frequency of each user action, or simply by running an unmoderated test with a larger participant pool.
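As a minimal, hypothetical illustration of that frequency-counting idea (the event labels below are invented for illustration, not taken from any particular tool's export format), a few lines of Python can turn one session's event log into per-action counts:

```python
# Minimal sketch: counting how often each action occurs in one recorded session.
# The event labels are hypothetical, not from any specific testing tool.
from collections import Counter

events = ["open_app", "tap_gift_wrap", "tap_back", "tap_gift_wrap", "confirm_order"]

action_counts = Counter(events)
print(action_counts.most_common())
# e.g. [('tap_gift_wrap', 2), ('open_app', 1), ('tap_back', 1), ('confirm_order', 1)]
```

Summed across participants, counts like these show where users loop back or take detours on the way to the goal.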

Particular to mobile app user tests, methods which track the user across the whole app journey can be more appropriate than shorter-form tests, as reaching a goal in an app may require more taps than the equivalent desktop experience.

Creating a mobile app user testing plan

A meaningful test will not happen by itself: you need to write a plan! Although we’re outlining all the parts in detail for this article, once you understand the elements, a mobile app user test plan should come together with minimal effort, and not take a long time. With your audiences, approach, and methods in mind, we suggest the following plan outline.

  • Fundamentals: Target market, enquiry, goals, metrics, time, and money.
  • Set-up: The test approach, method, tools, spaces, equipment, and technologies to be used.
  • People: List the ‘staff’ roles required to run the test and the person assigned to each role. For participants, how will they be sourced, and do they have any particular requirements?
  • Rigour: What are the risks to the integrity of the insights gained from the test, and how will each of these be mitigated?
  • Outcomes: Who is the report audience, and how will they use the report? Outline the artifacts to be generated, who is responsible for each, how they will be shared, and if any further action is required after sharing.

Identifying the key metrics to track

Hopefully you already have a clear idea of what your app does and how it helps users. Likewise, you have a reason for conducting an app user test. For example, 

  • We make it easy for users to buy flowers for their friends.
  • We are testing a better way to select gift wrapping in our app.

‘Easy’ and ‘better’ are qualities. Tying both of these statements back to the quality of the user experience keeps the test focused on helping users, which should be the ultimate purpose of product development. But while user testing may gather great qualitative commentary, numeric insights present a more immediately understandable picture to your report audience. Furthermore, with large, unmoderated, remote tests, qualitative feedback can become unwieldy.

Metrics are therefore used to illustrate qualities through numbers. While you can still collect qualitative information where appropriate, you should also convert each qualitative enquiry into something that can be measured numerically (a minimal aggregation sketch follows the list below). Common metrics include:

  1. Success: does the app user reach each goal (pass/fail)?
  2. Efficiency: how long does a user take to complete the test (number of seconds)?
  3. Satisfaction: how satisfied is the user upon completing the test (scale 1 to 7)?
  4. Confidence: how confident is the user with their selections in the test (scale 1 to 7)?

Identifying the key metrics for your app user test will largely depend on your particular product team’s focus and goals. If you are looking for further inspiration, a team walkthrough of your product experience is highly likely to uncover goals, questions, and associated metrics.

Choosing mobile application user testing tools

There are many digital tools for conducting mobile app user tests, from analytics and heat mapping to online testing and participant platforms like UXtweak, User Interviews, and others. These can be used for any of the test approaches, from remote unmoderated to in-person moderated, and will help you run a smooth, credible study while reducing effort and cost.

Choosing the specific mobile usability testing tools for your app test will mostly be determined by your personal and network experiences, any incumbent tools, and your budget. If you’re new to all this, we recommend trialing the options as far as possible before paying for anything, and taking any online reviews or guides with a grain of salt – as always, do your own research!

We recommend you seek out an online platform which:

  • Allows a mix of qualitative and quantitative questions
  • Supports Figma prototypes (or your other prototyping tool of choice)
  • Provides timers, heat maps, and other analytics
  • Provides many testing methods
  • Provides the ability to test on desktop and mobile
  • Integrates with or provides a test participant pool
  • Can require participants to take the test ‘mobile only’
  • Has good reporting features
  • Is something you feel comfortable and capable using

For in-person tests, we also recommend having some kind of room-recording facility, so you can capture the participant’s off-screen finger movements and overall body language.

If you’re looking to test beta versions of your app on live devices, your engineers should be able to guide you through Apple TestFlight and Google Play Console. And, if you’re interested in engaging a consultant to conduct mobile app user tests, they are probably better equipped than us to help you understand their services and costs. 

Any user testing tools you choose for mobile should be assessed on their abilities with regard to mobile tests – some tools are great for desktop experiences but offer only limited mobile app features, or do not adequately reflect real app use.

Conducting mobile user testing effectively

If you’ve made it this far, you’ve identified your audiences and test approach, written a plan, identified metrics and chosen some killer tools – you are well on the way to an effective mobile app user test!

So just a couple of pointers to add. First, run a pre-test with whoever is at hand in your office or home; this should make for a smoother final test. Second, for tests that run in any kind of sequence, include a running order in your test plan, with times and/or dates, to help keep things on track.

For bonus effectiveness points, double down on your test plan’s ‘rigour’ section. Question marks or errors in the insights you collect can render your test ‘ineffective’ in part or in whole, drastically reducing the value of testing.

Recruiting participants

There are a few ways to recruit mobile app user test participants, and again the process you choose will depend on your individual circumstances. The common user testing recruitment sources are described below.

  • Existing users: People who already use your app; your CRM may have them in segments. Pros: the most relevant to your app and/or particular test. Cons: reputational risk to your brand from unsatisfactory tests; already inducted into your ways.
  • Potential users: People who fit your user demographic/profile but are not currently users. Pros: possibly a large group, and the people you want to win over. Cons: hard to reach and engage.
  • Agency: Participants hand-selected by a digital, ad, PR, or other agency. Pros: the ‘right’ people, with good access to them. Cons: costly, small cohort.
  • Online user pools: A large set of users crowdsourced and profiled by online platforms. Pros: easy to access, lower cost, plentiful. Cons: user profiles are less clear than other options; access is indirect, so follow-up is more difficult.

Running user testing sessions

Conducting a remote, unmoderated user testing session is as simple as preparing your prototypes and completing the test set-up in your platform of choice (and paying the test fees!). For moderated and in-person mobile app user testing, we suggest the following order of events.

How to run user testing for mobile apps?

  1. Write the test plan
  2. Prepare the prototypes to be tested
  3. Prepare a script, outlining the tasks
  4. Recruit test participants
  5. Set up the test environment, including devices and recording methods
  6. Conduct the tests
  7. Create, share and discuss the test report

For the user testing session itself, we recommend this order.

  1. Introduce the participant to the session, being clear about goals and limitations, and usually stressing the requirement to ‘think out loud’.
  2. Ask some warm-up questions to ease the participant into the session and get to know their circumstances, both for the current day and for their lifestyle in general.
  3. Conduct the test tasks, which will vary depending on the methods you have chosen.
  4. Outro and any discussion, including follow-up questions from the participant to the tester, if appropriate.
  5. Review the session with your colleagues. Even if they did not attend, it can be helpful to debrief.

Observing and documenting user behavior

Again, unmoderated, remote online tests are great for recording most user behavior, and can provide all the heat maps, analytics, timers, and recordings you’ll need. 

In-person and moderated user tests should also include room-recording to capture speech and body language. Ideally, those kinds of tests would include both a moderator and a note taker, who can each focus on their own tasks. Note takers should record broad observations of the user, remembering to note body language, distractions, and meaningful hesitations.

Analyzing the user testing results 

If you’re working in a team, a post-test review session can help to shape interpretations, or a direction for how to present the results. In general, you’re looking for patterns which relate to the validity and performance of the test app, which can be evident in graphic feedback, words and behaviors.

Test sessions may result in a dump of observations, both personal and platform-generated. The primary task is then to identify the points most significant to your testing goal. It’s a classic signal-versus-noise scenario, where your job is to separate each observation into one of four categories (a minimal tagging sketch appears at the end of this section):

  1. Directly relevant to the testing goal – this confirms or invalidates a hypothesis or issue.
  2. Indirectly relevant to the testing goal – this is not part of the testing goal but is affecting the testing goal results.
  3. Seemingly important but irrelevant – this highlights something significant for further attention outside the current test.
  4. Seemingly unimportant and also irrelevant – this can be ignored.

Observations in categories 1 and 2 can be described in detail, illustrated with charts, and supported with additional data from the test and from outside sources, to give them appropriate impact.
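If your observations end up in a list or spreadsheet, the triage can be as simple as tagging each one and filtering. Here is a minimal sketch, with hypothetical observation notes and hand-assigned category tags:

```python
# Minimal sketch: tagging observations with one of the four categories above
# and keeping only the reportable ones. Notes and tags are hypothetical.
DIRECT, INDIRECT, PARK, IGNORE = 1, 2, 3, 4

observations = [
    ("Participant completed gift-wrap selection in two taps", DIRECT),
    ("Slow prototype loading inflated task times", INDIRECT),
    ("Two participants questioned the checkout copy", PARK),   # follow up outside this test
    ("One participant took a phone call mid-session", IGNORE),
]

report = [note for note, cat in observations if cat in (DIRECT, INDIRECT)]
backlog = [note for note, cat in observations if cat == PARK]
```

Categories 1 and 2 feed the report; category 3 becomes a backlog for attention outside the current test.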

5 Tips for a Successful Mobile App User Test

  1. Make it mobile – tests performed on a mobile device are more accurate to real use.
  2. Be rigorous – setting your test up for valid results is the most important part of the test plan.
  3. Choose the right method – a single click may be all you need.
  4. Make it simple – streamlining the test plan, prototypes and report will keep you sane and make more impact.
  5. Keep an open mind – be strongly biased to neutrality, from the outset until you start forming conclusions in the final report.

5 most common mistakes of user testing mobile apps

Here are some of the most common mistakes you can make when performing mobile app testing:

  1. Failing to define clear goals
  2. Only testing with your friends and colleagues, instead of real users
  3. Forgetting the context of use
  4. Ignoring, inadequately sharing, or complicating the results
  5. Assuming the test user’s mobile device expertise is the same as yours

Final 5 Best practices for mobile usability testing 

  1. Testing early and often saves rework, heartache, and money.
  2. Incorporate test user feedback to improve the test itself.
  3. Regularly revisit and refine the test strategy of your organization or team.
  4. Make user testing a collaborative effort.
  5. Put as much enthusiasm into your test plan as you put into your prototypes.

Conclusion

In spite of the deep dive this article gives you, mobile application user testing is potentially simple to undertake, while undoubtedly powerful in creating the best app user experiences. Whether all you have is a prototype, a platform, and some users; or you’re going large with in-person moderation, within a day or two you will have insights to inform the development of delightful app UX.

FAQ: User testing mobile app

What is the difference between user testing and usability testing?

Although the terms are often used interchangeably, there is a real difference between user testing and usability testing. User testing is the process of observing and recording people’s actions when they’re interacting with a product. Usability testing is the name for test methods used to understand if the product is able to perform its purported action.

How do you perform usability testing for mobile applications?

During design phases, usability testing can be performed on any prototype, from the simplest paper model to complete, high-fidelity, interactive prototypes. These are usually presented to the test user through a website, which can be accessed on mobile, using a user testing platform like UXtweak.

For coded beta or live-deployed apps, the test subject app is loaded through Apple TestFlight or Google Play Console, and the session is recorded with a screen recorder and/or external video camera.

What are the benefits of mobile usability testing?

We all want the apps we make to help users do things which improve their lives, right? Testing your app during and even after development, with real users, gives us the best chance of achieving that.

Joe Bland

Joe is Head of Design for a fintech startup and a writer for UXtweak - powerful research tools for improving the usability of digital products, from prototypes to production.

Ideas In Brief
  • The article explores the optimal procedure for conducting mobile user testing effectively.
  • The key steps of mobile app user testing include:
    • Defining your target audiences;
    • Selecting an appropriate mobile user testing method;
    • Creating a mobile app user testing plan;
    • Identifying the key metrics to track;
    • Choosing mobile application user testing tools;
    • Recruiting participants and running user testing sessions;
    • Observing and documenting user behavior;
    • Analyzing the user testing results.
