December 19, 2018
How to write scenarios for a remote user test?
Carrying out user testing is one of the key stages in any website or mobile app development or redesign project.
When it comes to a remote user test, it's essential to write a test protocol that will yield the answers you're looking for when you later analyze the user feedback.
In this article, through a combination of good practice advice and specific examples, I will show you how to write a good user testing protocol for your remote user test.
Defining the test objectives
What is the main mistake people make when writing a user test protocol? Trying to test everything! This is actually the best way to learn nothing, as the various research objectives will be too loosely connected to one another to support a coherent user test. To arrive at a suitable methodology, therefore, it is important to first clearly define your objectives and then draw up a coherent testing scenario.
How do you go about defining the test objectives? This depends on the stage your project is at and how much you already know about your users' journeys and perceptions. The earlier you are in your project and the less information you have with which to formulate your research hypotheses, the more exploratory those hypotheses will be.
Important: if your project is at a very advanced stage and you need to refine your definition of the user requirements, carrying out user interviews or holding a focus group will be more relevant than conducting a remote user test.
The following are examples of remote user test objectives:
Ensuring usability levels.
Testing the information architecture.
Evaluating how the design is perceived.
Checking that the website/app is appealing to users.
Verifying the user experience across different devices.
Prioritizing the features to develop next.
Your greatest ally when it comes to writing a user testing protocol? Empathy.
Always bear in mind that the objective of a test is to confront your hypotheses with the reality users actually experience, and to deepen your knowledge of that experience, not to convince users of anything or explain anything to them. To generate useful feedback for your research, you therefore need to create written tasks that serve as a framework for the test, not a user guide or a set of marketing messages.
An example of the kind of detailed instruction to avoid:
“Click the user icon to access your account. What do you think of the browsing experience?”
Imagine yourself in the place of the users participating in the tests and use real-world scenarios to engage them in a realistic experience they can actually envisage themselves going through in real life.
An example of an engaging instruction:
“Search for a flight to New York for tomorrow. Describe the steps taken and what the experience was like.”
Creating written tasks: exploratory or specific?
As stated in the introduction, the written tasks you create need to match your objectives. It is these very objectives, therefore, that will enable you to orient your tasks, which can be either exploratory or specific in nature.
When you only have a small amount of information about how your interface is being used, it can be useful and interesting to create exploratory type tasks that will enable you to collect a highly diverse range of feedback from the test and develop a better understanding of your users' experience.
An example of an exploratory instruction:
“Go to the home page and tell us your first impressions.”
When you are very familiar with your users' experience and the various potential problems that can occur within that experience, on the other hand, you will then create more specific kinds of written tasks designed to collect feedback about these particular scenarios. Always try to structure these instructions in such a way that they set an objective rather than provide an explanation.
An example of a specific type instruction:
“Last night, you began watching Frankenstein, but you're not interested in watching the rest of it. How do you go about stopping the film appearing on your home page? Describe the steps you take.”
Why limit yourself to 10 tasks per test?
Users' attention spans only extend so far, and this has a direct impact on the quality of the feedback collected.
Though there is no magic number where the task count is concerned, as it also depends on how difficult the tasks are, after carrying out hundreds of remote user tests we have come to the conclusion that the number of tasks should be limited to a maximum of 10 to avoid degrading the quality of the feedback obtained.
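To make these rules concrete, the two task types and the ten-task ceiling could be sketched as a simple data structure with a sanity check. This is purely an illustration: the names `Task`, `TaskType`, and `validate_protocol` are my own and not part of any real testing tool.

```python
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    EXPLORATORY = "exploratory"  # open-ended, e.g. "tell us your first impressions"
    SPECIFIC = "specific"        # targeted at a known scenario or potential problem

@dataclass
class Task:
    instruction: str
    type: TaskType

MAX_TASKS = 10  # ceiling suggested above to protect feedback quality

def validate_protocol(tasks: list[Task]) -> list[str]:
    """Return a list of problems found in a draft protocol."""
    problems = []
    if len(tasks) > MAX_TASKS:
        problems.append(f"Too many tasks ({len(tasks)} > {MAX_TASKS})")
    for i, task in enumerate(tasks, start=1):
        # A crude heuristic: detailed UI directions ("click the ... icon")
        # read like a user guide rather than a realistic goal, so flag them.
        if "click" in task.instruction.lower():
            problems.append(f"Task {i} may be over-directive")
    return problems

protocol = [
    Task("Go to the home page and tell us your first impressions.",
         TaskType.EXPLORATORY),
    Task("Search for a flight to New York for tomorrow. Describe the steps "
         "taken and what the experience was like.", TaskType.SPECIFIC),
]
print(validate_protocol(protocol))  # an empty list means no problems found
```

The point of the sketch is simply that a protocol is a short, ordered list of goal-oriented tasks, and that both the count and the wording can be checked before launch.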
How to test your scenario?
A remote user-test protocol cannot be modified once it's been launched. You therefore need to verify its ability to provide you with meaningful and relevant feedback.
To do this, we recommend testing your scenario with someone from outside the project who has never seen the interface and will have to follow the written instructions in order to give their verbatim account of the experience.
This will enable you to expose any incoherences in your protocol and the interface being tested before launching the test with a larger sample of testers.
When it comes to creating a written user test protocol that will enable you to both glean as much as possible from the users' experience and meet your objectives, the following three points are the most important to bear in mind:
Define precise study objectives based on the stage the project is at and the challenges it involves.
Be empathetic when creating the written instructions and make sure you engage the user in a realistic experience.
Test the protocol beforehand to avoid any incoherences.