October 1, 2021
Chatbot Test | Measuring the UX of a Conversational Agent
The consulting firm Roland Berger reports that in 2019, between 16 and 20 million French users were using or had already used a voice assistant. On a global scale, it projects that voice assistants will number 8 billion by 2023. These figures could rise further as these connected objects become part of everyday life. This meteoric growth has been made possible by advances in voice recognition, which make voice interactions with smartphones more fluid. At the same time, artificial intelligence is making conversational agents ever more capable. These two phenomena reinforce each other and explain the emergence of bots. In this article, you will find all the steps needed to test your chatbot.
1 | Exploratory interview
As with any interface, we advise you to test it as early as possible. When creating the bot, in the absence of any analytics data, it is important to conduct exploratory interviews in order to put concepts, creative ideas, offers, interaction scenarios, and so on in front of web and mobile users.
The main benefit: specifications that take the opinions of real customers and visitors into account. This generally takes place in a one-on-one qualitative interview, in person or on a conference call. These exploratory tests are all the more crucial because we are still in the middle of a collective learning curve for human-chatbot interactions.
Next, run a user test of your site or your application. The developer behind Warby Parker's chatbot advises using a bot to smooth the customer journey on existing interfaces. To do this, he recommends starting with a user test of the website or mobile app currently online, in order to identify the “pain points” (sources of irritation) in the conversion funnel. More details, methodological advice, and quantified results can be found in his article published in Chatbots Magazine.
2 | Mock-up and interaction scenarios
Once the interaction scenarios are drafted, you can have them tested to make sure the chosen direction is the right one. Choose testers different from those in the first round to avoid bias.
If a prototype is available – ideally, animated mock-ups – the interaction will be closer to the final rendering and the lessons learned will be more relevant.
Write the scenario for the chatbot test:
Keep in mind that the scenario is an expression of your study goals. You should therefore start by listing what you want to learn from it.
We advise you to start the brief with a contextual element: “you are going to test ...” and an instruction to “put yourself in the shoes of someone who wants to ...”
Then, imagine a test scenario. You can use one of your personas, i.e. an imaginary visitor, representative of one of your customer segments. The scenario is composed of steps that the tester will have to follow, with an objective and questions for each of them.
In a UX test with real users, we recommend asking open-ended questions and setting very broad objectives (e.g., try to contact us, find this info, etc.).
This scenario should be realistic for an actual user. To that end, be careful not to multiply the objectives – or else run several separate tests. For example, a tester will have 5 to 10 steps, 1 main objective (e.g. book, order...), and possibly secondary objectives (e.g. get information, share content...).
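As a rough illustration (the names and structure below are ours, not a Ferpection tool), such a test scenario can be captured in a simple data structure that enforces the guideline of a handful of steps and a single main objective:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    instruction: str                                 # open-ended task, e.g. "Try to contact us"
    questions: list[str] = field(default_factory=list)

@dataclass
class TestScenario:
    context: str                                     # "You are going to test ..."
    persona: str                                     # "Put yourself in the shoes of someone who ..."
    main_objective: str                              # e.g. "book", "order"
    secondary_objectives: list[str] = field(default_factory=list)
    steps: list[Step] = field(default_factory=list)

# Hypothetical example for a travel chatbot
scenario = TestScenario(
    context="You are going to test our travel chatbot.",
    persona="Put yourself in the shoes of someone who wants to book a weekend trip.",
    main_objective="book",
    secondary_objectives=["get information", "share content"],
    steps=[
        Step("Ask the bot for weekend destinations", ["Was the answer clear?"]),
        Step("Try to book one of the suggested trips", ["Where did you hesitate?"]),
    ],
)

# A realistic scenario keeps to 5-10 steps at most and one main objective.
assert 1 <= len(scenario.steps) <= 10
```

Keeping the brief in a structure like this makes it easy to reuse the same skeleton across test rounds while swapping in different personas.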
To go further, you can also consult our related tips.
3 | Launching pre-production tests
The most crucial test takes place in the pre-production phase. Ask dozens of testers to try out your conversational agent. They will surface bugs as well as usability, UX, comprehension, perception, and conversion issues, and positive points too.
Ferpection's user testing and UX research platform is perfectly adapted to all of the above tests.
You can complete these UX tests with a functional test. The goal is to have professional testers try out all the scenarios, the CTAs and all the functionalities.
4 | Conducting iterative chatbot tests in production
The analytics data of your site/app and of your bot will help you understand behaviors from a quantitative point of view and detect the main drop-off points.
At this stage, as at the previous ones, remote user testing will allow you to detect the pain points that are most striking in the eyes of users. More specifically, you will be able to understand WHY some users dropped out. Indeed, even if visitors know that a robot is answering them, the slightest mistake is disappointing and can destroy the user's trust. Finally, some platforms such as Botfuel offer A/B testing of chatbots. The challenge is to start an iterative process: pose optimization hypotheses, then determine the best-performing version among those tested on the website or mobile application.
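The core of any chatbot A/B test is assigning each visitor to one variant and keeping them there across sessions so that conversion rates can be compared fairly. As a minimal sketch (this is a generic technique, not Botfuel's actual API), deterministic hashing of the user identifier achieves this without storing any state:

```python
import hashlib

def assign_variant(user_id: str, variants: tuple = ("A", "B")) -> str:
    """Deterministically bucket a user into a chatbot variant.

    Hashing the user id keeps each visitor on the same variant
    across sessions, so conversion can be compared per variant
    without a server-side assignment table.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Each visitor always sees the same version of the bot:
assert assign_variant("visitor-42") == assign_variant("visitor-42")
assert assign_variant("visitor-42") in ("A", "B")
```

Once traffic is split this way, each optimization hypothesis becomes a new variant, and the iterative loop consists of measuring conversion per bucket and promoting the winner.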
5 | Analyzing the results of the user test
As with any qualitative test, empathize with the respondent and take a step back. Do not get defensive if the feedback is blunt. Try to understand their experience and the pain points, suggestions, and satisfactions they have expressed. To make the right diagnosis, look for volume: the convergence of numerous pieces of feedback and/or users toward an observation or a hypothesis. A single piece of feedback carries little weight on its own. It is a weak signal, which is reinforced if several converging signals appear (or set aside if it remains isolated).
Why optimize a chatbot?
Beyond the communication and image issues, chatbots allow:
- A reduction in costs, notably by relieving customer service – and the salespeople in digital stores – from having to answer the simplest questions.
- An increase in conversion rates on a website or a mobile app, by improving the UX and the visitor's engagement, in particular by reducing certain navigation obstacles (see example below)
Even if the bots currently online are still not very “smart”, recent progress suggests that their conversational capabilities will improve rapidly in the near future.
Once we understand the business potential of these conversational agents, we need to optimize them, and thus test them.
User testing and interaction scenarios | What are the specificities?
Testers, like your future users, will often be tempted to play at “breaking the bot”, i.e. they will try to find its flaws. Our opinion: you might as well take advantage of the user's chatbot test to learn as much as possible and optimize the bot along the way.
There are several ways to achieve this. You can for example:
- choose not to mention anything in the first steps, then ask testers specifically to look for and report flaws
- or ask at the beginning of the brief to be kind to the bot and to make its task easier, then fine-tune the exercise as you go along
This will depend on your objectives and the level of “maturity” of your conversational agent.
In view of the trends that are emerging for the next few years, chatbot tests are more relevant than ever! Remember that the study is built on these 5 main steps:
- exploratory interview
- interaction scripting
- pre-production testing
- iterative testing in production
- analysis of the results of the user test
If you have a bot launch or optimization project, do not hesitate to discuss it with our experts!
You may be interested in our Crédit Agricole client testimonial on conversational agents in the banking sector and the chatbot comparison they made.
Did you like this article? Share it 😉!