The chief purpose of usability testing is to have real people accomplish actual tasks on websites, cell phones, hardware, and software.
Of course, the first key step is to identify what users are trying to do. After that, you can decide which tasks you want to test. To do this, you’ll need to create genuine test scenarios for users to attempt.
A task comprises the steps a user has to go through to accomplish a goal. A test scenario illustrates the activity the test user is trying to carry out by offering the necessary details and some context for achieving that goal.
To craft an ideal test scenario, you have to strike a balance: provide enough information that users aren’t left guessing what they are supposed to do, but not so much that there is nothing left for them to work out on their own. Remember that the scenario needs to mimic the nonlinearity and discovery of real-world application usage.
So, how do you ensure that your test scenario is well balanced? Here are seven tips that will guide you:
1) Be Specific
Avoid generalizing your tasks. For instance, instead of telling users to find house appliances, be specific and tell them to find a dishwasher under $800 that is well rated by customers. In short, give your participants a clear purpose or reason for performing the intended task.
With a clear purpose, users can quickly narrow their search based on product attributes, recommendations, and indicators of quality. Giving them only a general idea of what they are supposed to do widens their scope, and they may end up guessing, which will not produce the intended results.
In a usability test, a vague purpose leaves users running into problems and turning to the moderator to ask what they are supposed to find. Avoid vagueness by spelling out the specifics; for instance: “Visit the finance department on August 10 between 10 am and 1 pm.”
2) Don’t Instruct The User On What To Do
While it is good to provide users with specific instructions and a clear purpose, giving them specific directions about how to carry out the task will interfere with the results. Leading users too much is less helpful and can bias the findings. For instance, instead of telling them to “click on the drop-down menu at the right corner of the site to submit your report,” just say “Submit your report through the site.”
3) Use A Language Users Can Relate To
Using company terms or jargon when explaining to users what they are supposed to do can confuse them. If users are not conversant with the terms used in the test scenario, the result is outright confusion or false test results, as they’ll be working from imagination and guesswork.
Using terms such as “liability” when referring to their loans and “assets” when referring to their children’s college funds will only confuse ordinary users, who are not used to such terms. Will the users in your test scenario know what a “mega menu,” an “item page,” or even a “configurator” is?
Therefore, you need to be mindful of such terms.
4) Have Correct Instructions
If the users need to go to the finance department of a specific organization, they need to be provided with the correct office number. This makes the test more straightforward for the user and allows you to know whether the task was completed successfully.
The problem with a “visit a finance office of the nearest organization” task is that users will be in the mindset of solving the problem however they can. At the time of the test, the nearest organization might not even be open, and users will be more interested in finishing the test and collecting their compensation. This can lead to incorrect conclusions, and you might end up inflating basic metrics such as task completion rates.
5) Avoid Making The Tasks Dependent On Each Other
It is important to have independent tasks that don’t depend on one another. For instance, telling the user to create a document in one task and then delete that same document in another task can leave the user in a mess if they fail to complete the first step. Do your best to avoid dependent tasks.
However, this might not be possible if you are testing something like an installation process. In that test scenario, you need to be aware of the biases and complications these dependencies can introduce.
6) Provide Context But Keep Your Test Scenario Brief
The purpose of a test scenario is to provide context that makes users feel as if they actually needed to carry out the task. However, you should not go overboard with explanations. For example: “You’ll be visiting the finance office of the ABC Company in August and need to write a report of its financial activities.”
7) Test Scenarios Differ For Moderated And Unmoderated Testing
The craft of test-scenario writing has been refined over the years through moderated lab-based testing. But if you are running an unmoderated usability test, it calls for extra refinement: you can’t rely on a moderator to guide users through the test and ask them what they would expect.
You need to be more explicit without over-explaining. You’ll need to provide names, price ranges, and brands of the items to be tested. While many people see this as giving users specific direction, task completion rates above 90% are rare even in unmoderated benchmark studies.
Even with specific details explicitly spelled out, users get confused by checkout procedures, by navigation, or by something as simple as organizational jargon on complex organizational websites.
THE BOTTOM LINE
It takes considerable time to learn how to balance between not leading users and making their tasks too difficult. There are no “ideal” or “perfect” tests, so don’t be afraid to tweak details to suit different test types (moderated vs. unmoderated) or goals (findability vs. checkout). You can even read the test scenarios aloud instead of showing them on screen or on paper.