Remote User Testing – 10 tips to improve your user research

We’d like to present 10 points worth remembering when creating a remote usability test. The list is based on our own experience and on our work with customers.


Every UX test has its own “edge” – a chosen methodology, a number of participants, its pros and cons. Some elements are similar across tests, but a great idea from one type of usability test can’t always be transferred to another. In remote unmoderated tests the defining constraint is that the user has no contact with us – and no moderator is available to help when problems arise. This is why the chosen form of communication is so important.


We’ve gathered several hints and details that can help you create remote usability tests. So let’s get to work :)

1. One test will not answer all your questions


Researchers often face the temptation to cram as many areas as possible into a single study. For example, in one usability test you’d like to check how light and heavy users behave while browsing the site, what problems users experience on the key path and on a few additional paths, and so on. The consequences of such a decision can be severe: the entire test may turn out to be unreliable and, as a result, has to be conducted again.


A real-life example: an additional feature that would be incredibly helpful to a small group of heavy users cannot be meaningfully evaluated by a group of testers who are light users and will not understand it.


The truth is that the most effective tests follow the principle of Occam’s razor – limit the number of assumptions and focus on the most important one. So pick the key element and test only that.

2. Limited number of tasks in a study


Moderators of usability tests often assume that 10 tasks per study is the right number because they can get through them quickly themselves. They couldn’t be more mistaken! Test creators judge the test subjectively, from their own perspective. The fact that a moderator can finish the test in 15 minutes doesn’t mean a tester will take the same amount of time.


Why? Researchers click through the website quickly because they already know it. The user doesn’t. First, the user will spend extra time on the initial “getting to know the site” phase, and second, it will take them a while to get used to the specifics of remote testing itself.


Size matters, but here it works the other way round :) On average, after five tasks (or around 10–15 minutes) users start to feel uncomfortable – they get tired and no longer pay as much attention to completing tasks as they did initially. And remember that it’s not only the length of the scenario tasks but also the time spent answering questions that counts here.
This is why we recommend planning around 10 extra minutes for the study and limiting the number of tasks to 4–5.
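The arithmetic behind this recommendation can be sketched in a few lines. All the per-task numbers below are illustrative assumptions for a back-of-envelope estimate, not measured values:

```python
# Back-of-envelope estimate of total study time for one participant.
# Every default value here is an assumption, not a measurement.

def estimate_study_minutes(num_tasks,
                           minutes_per_task=3,      # assumed average time per task
                           minutes_per_response=2,  # assumed post-task questionnaire time
                           onboarding_minutes=10):  # "getting to know the site" + tool setup
    """Return a rough total duration in minutes for one unmoderated session."""
    return onboarding_minutes + num_tasks * (minutes_per_task + minutes_per_response)

print(estimate_study_minutes(5))   # 5 tasks  -> 10 + 5 * 5  = 35 minutes
print(estimate_study_minutes(10))  # 10 tasks -> 10 + 10 * 5 = 60 minutes
```

Even with these modest assumptions, doubling the task count pushes a session well past the point where fatigue sets in.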

3. Adding user context to the tasks


Remember to always stimulate users’ imagination and add appropriate context to the tasks they complete. Even if we recruited users who purchased a TV set in the 6 months prior to our study, they probably can’t clearly remember their decision-making process. Likewise, in first-impression testing it’s not enough to simply show users the site and ask what they think about it.


A study is not a truly natural context for using a site. Most of the time users arrive at a site from some source (e.g. an ad, a search engine, a social media channel), so they already know what to expect from it: the cognitive context is activated before they enter, and from that perspective the only job of the site is to confirm that they have come to the right place. When you’re looking for a cabinet, you go to a furniture website; when you want to buy a book, you visit an online bookstore.


This is why, before showing users a site and asking what they think about it, remember to add some situational context, e.g. “Imagine that you want to buy a TV set as a gift for your parents and you’ve just entered the site you’ll see in a moment. Take a look at the site and answer the questions below.” This makes it easier for users to understand what they’ve just accessed, why, and what they want to do there.


Additionally, the context can be extended even before users start the task. Asking a few warm-up questions beforehand is a good strategy, because such questions help participants recall a given situation. For example:

  • In our entry questionnaire you indicated that you were searching for a TV set in the last 6 months. Try to recall that situation. Where did you search for the TV set?
  • Which websites did you use most often?
  • What was important to you back then?
  • Did anything positively or negatively surprise you during your search?
  • Did your online search succeed?

These questions mirror the entry interviews conducted in a typical lab test. On the one hand, they help participants get into the context; on the other, they let us understand participants as individuals – their way of searching, and so on.


But let’s remember that in qualitative research these answers are declarative, and we shouldn’t generalise them to the entire population of website users :)

4. No leading questions in tasks


Remember not to use leading questions in tasks. If you want to test a given functionality, don’t name it directly in the task. If you do, participants will fixate on that word and simply scan the site to find it.
Take testing a “Favourites” functionality on a website offering apartments. Your scenario shouldn’t look like this: “You’re considering buying an apartment. Find a few interesting offers and add them to Favourites.” Instead, try: “You’re considering buying an apartment. Browse through the offers and choose the three most interesting ones. Find a way to save them so that you can easily get back to them later.”
The same approach should be adopted for questionnaires – ask neutral questions, e.g. “How would you rate the site in the context of the task you’ve just completed? (1 – problematic, 5 – easy to use)”.

5. Soft-launching


Soft launches should be used! First on yourself – to find typos and functionality issues – and then on a small group of users such as friends or colleagues (e.g. 2 people). They will let you verify the clarity of the tasks, determine the time needed to complete the test, and check how the results will be displayed and analysed. Before the final launch we can still reconfigure these elements and modify them if needed.


Soft-launching matters because participants can misunderstand what we meant in a task – the interpretation of our written text can be far from what we expected. So check that your tasks are clear, and only then recruit participants for the test.

6. Paying attention to the contents of the Welcome and Thank You pages


While creating a usability test we usually pay most of our attention to its central part: the concrete tasks that users will complete. These are certainly important, but remember that the first screen users will see is the invitation to the study.


Let’s face it – by taking part in a test, users do us a great favour. If we don’t offer a prize, they aren’t compensated for participating and are simply giving up their time. Not many people like working pro bono, and such people are hard to find. So appreciate their participation and say so explicitly. Think about what phrasing would convince you to take part in such a test.


It’s really important to emphasise the significance of the study and how it will help the creators develop their product – especially when working with clients who already have opinions about the company. The same goes for the ending of the test, where you thank participants. A bare “Thank you for your participation” may not be enough.


While writing the invitation, remember to include the following information:

  • the aim of the study
  • what the study is about
  • the structure of the study, e.g. “during the study we will ask you a few questions and ask you to complete tasks on the website”
  • the duration of the study
  • what the next step will be, e.g. “in a moment you will download a harmless program which will record your actions on the site”
  • information that participation in the study is voluntary
  • a note on anonymity
  • information about screen and webcam recording (for tests where users’ reactions are recorded)


Keep the text short – try to fit it into two paragraphs of 3 (max 5) lines each, because no one will read through a “wall of text”.

7. The ergonomics of questionnaires


Questionnaires are a very important element of a successful study, so pay close attention to them. Keep asking questions, and use questionnaires as often as is reasonable.

Even though in a functional test users are asked to comment aloud on what they are currently doing, they often skip it for various reasons, e.g.:

  • Talking to a computer and describing where you click and why is not natural behaviour :)
  • The task itself, e.g. finding something, is so absorbing that people forget to comment and focus on the task instead
  • Some people prefer describing things in writing rather than out loud – this also happens during lab testing, where a moderator has to ask the more reserved participants additional questions.


This is why you should ask participants how they completed the tasks: whether anything surprised them, how they would rate the difficulty of achieving the goal, and whether the tested website helped them achieve the goals within the task.


It’s also worth noting that people get used to the product they are using and forget what they were doing at the beginning.
If we ask participants after three tasks how they would evaluate the site, they will judge it through the whole range of experiences they had with it, not through their experience of the individual tasks. This is why we should use questionnaires – ask questions after each task, and evaluate the overall impression of the site at the end.

  • People generally prefer choosing from given options to describing their opinions, so be careful with the number of open-ended questions. One open-ended question per questionnaire is enough.
  • Remember to add answers like “I don’t know” and “I have no opinion” at the end of the answer list. It seems basic, but it’s easy to forget while designing the study. These escape answers matter in most cases: users should be free to express their opinions, and forcing them to pick only from our pre-defined answers is not an option. When are they unnecessary? For questions about facts rather than opinions – e.g. the standard metrics of gender, age and place of residence.
  • Some questions can also be handled with card sorting (e.g. “choose the elements/features that best describe the website; put the remaining ones in the ‘other’ basket”).


Use card sorting sparingly, though – out of consideration for testers: it’s much easier to click through answers than to drag and drop them.
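The questionnaire rules above can be sketched as a small data structure. The question texts, scale labels and the helper function below are illustrative assumptions, not the API of any real survey tool:

```python
# Sketch of a post-task questionnaire definition.
# All names and question texts are invented for illustration.

ESCAPE_OPTIONS = ["I don't know", "I have no opinion"]

def closed_question(text, options, allow_escape=True):
    """Build a closed question, appending escape answers at the end by default."""
    q = {"text": text, "options": list(options)}
    if allow_escape:
        q["options"] += ESCAPE_OPTIONS
    return q

post_task = [
    # Opinion questions get the escape answers appended automatically:
    closed_question(
        "How would you rate the site in the context of the task you just completed?",
        ["1 - problematic", "2", "3", "4", "5 - easy to use"],
    ),
    # Factual questions (standard metrics) don't need escape answers:
    closed_question("What is your age group?",
                    ["18-24", "25-34", "35-44", "45+"], allow_escape=False),
    # One open-ended question per questionnaire is enough:
    {"text": "Did anything surprise you while completing the task?", "open": True},
]
```

Encoding the “escape answers on by default” rule in the helper means forgetting them becomes the exception rather than the norm.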

8. Drawing conclusions by looking at differences


This point is especially important for e-commerce websites. While testing them, researchers often jump to a misleading conclusion: if testers found a product from one category, the website doesn’t need any changes.


But reality can be different – the fact that users found one product, e.g. a hair dryer, doesn’t mean they will find a TV set just as easily. The characteristics of one product category can differ sharply from another: search filters, terminology, the number of products, how they are described, and many other elements. Users will therefore search differently in the so-called high-end categories (frequently used and well polished) than in the low-end categories (used less often).


There are three possible solutions to this issue:

  • The first is to skip secondary categories in the study. Applying Occam’s razor, we exclude the unnecessary variables by creating tasks for the one or two product categories with the highest traffic. Of course, we can’t be certain that this experience will transfer to other categories.
  • The second is to run 2–3 tests on small samples of users, e.g. 5–8 participants per product category. This gives us a sample of information for each category, and we can treat the conclusions holistically.
  • The third (and the worst from the perspective of drawing conclusions) is to ask participants to complete three tasks in a row. In theory users will “touch” each category, but they will approach each new task from the perspective of the previous one, reusing the models and behaviours that worked before.

9. Testing unfeasible paths


The aim of UX designers is to create cutting-edge solutions; the aim of UX researchers is to test them. But not everything can be tested and easily explained to users. The case in point is a scenario built around a functionality whose purpose is hard to explain within the test (this doesn’t mean the functionality isn’t useful, but users need to understand its purpose before they will want to use it). Such functionality is therefore very difficult to describe in a task scenario.


Having built such a functionality, we’d like to test how easy it is to find and use. At the same time, we’re not sure how to explain it to users without leading them or telling them too much. And above all, with such functionalities users may first ask themselves, “why would I do something like this on this site anyway?”


In such cases we need to face the truth: not every functionality can be tested remotely. Sometimes it’s simply easier and more effective to meet users in person, talk about the option, show where it can be found, and so on.

10. Drawing conclusions from Google Analytics data


When testing an already-operating site, we shouldn’t rely only on our assumptions. First analyse the statistics: traffic sources and navigation paths. Segment the traffic to observe trends, and only then analyse the individual segments.


This gives us more comprehensive results on how the site is used. What’s more, after the test we will be able to address a given demographic segment more efficiently and estimate the productivity and ROI of the study.
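Segmenting traffic before testing can be sketched with nothing but the standard library. The session records and field names below are invented for illustration; real data would come from an analytics export (e.g. a Google Analytics CSV):

```python
# Minimal sketch of per-segment traffic analysis. The data is fabricated
# to show the shape of the computation, not real analytics output.
from collections import defaultdict

sessions = [
    {"source": "organic", "converted": True},
    {"source": "organic", "converted": False},
    {"source": "ads",     "converted": False},
    {"source": "social",  "converted": True},
    {"source": "organic", "converted": True},
]

def conversion_by_segment(rows):
    """Group sessions by traffic source and compute each segment's conversion rate."""
    totals = defaultdict(int)
    conversions = defaultdict(int)
    for row in rows:
        totals[row["source"]] += 1
        conversions[row["source"]] += row["converted"]  # bool counts as 0 or 1
    return {src: conversions[src] / totals[src] for src in totals}

print(conversion_by_segment(sessions))
# Here organic sessions convert at 2/3 while ads convert at 0 - a gap like
# this can suggest which segment to recruit test participants from.
```

Running the numbers per segment before designing the test keeps the recruitment and the tasks anchored to how the site is actually reached.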




Igor Farafonow

Currently CEO at Uxeria. Information architect, designer. From 2007 involved in numerous web, mobile and desktop projects. In his free time a fan of Thai cuisine, old sports cars, photography and Jamaican disco polo.
