RHS AI Chatbot

New AI chatbot tool: reducing wait time and providing quick value (B2B/B2C)

44%

User retention rate

78%

Thumbs up feedback

250+ users

Have said it solves their query

Headquarters

London

Founded

2023

Company size

100+

Overview and challenge

Born out of user frustration with the waiting times to receive advice from the horticulture advisory service team, this project made the case to improve and add to the toolkit inside the GROW app by using large language model (LLM) AI to speed up advisory responses for users and to help the advisors too. This case study covers the discovery and ideation period through to branding and implementation.

How might we reduce wait time and user frustration while providing quick value to users?

How might we relieve the advisory service's workload and provide support?


Research and planning
  • Research into large language models and which would be the best solution for our product and space

  • Understanding best practices for machine learning

  • Interviewing stakeholders to better understand what the business wanted to achieve by implementing this tool


Leading design on
  • Several iterations of the chatbot multi-platform functionality: desktop, mobile and native app

  • A new branding application, iterated to achieve the right aesthetic and fit for the company

  • A component library (design), built using Figma

  • Beta release of the tool on web app

  • Several rounds of design QA on web, mobile and native app


Competitor analysis

I researched comparable products such as ChatGPT, Bing Chat, Bard and more, and quickly learned that using these tools regularly helped me better understand how they worked. I also undertook short courses and read up on how AI chatbots and large language models function.

Understanding the technicalities behind the tool helped me to understand:

a) What would be surfaced back to the user and how

b) What our potential could be for the tool


User journeys and wireframes

Speaking with stakeholders and developers, I learned that the tool would be hosted on its own subdomain. This meant we were not under as tight a technical scope as usual when designing for the main site.

There were a few things I had to keep in mind:

  • Designing MVP for beta release

  • Connectivity to other tools (behind login)

  • Lives on subdomain but surfaced behind log in

  • Does this affect any other journey?

I was then able to map out the user journeys for this project. I started by mapping paths and then translated these into wireframes. I kept in close communication with stakeholders and the team to ensure we were aligned on the outcomes, as this was a very high-profile project.

Brand discovery and implementation

As we were working with a third-party agency, I scheduled regular meetings with them to keep them informed of, and happy with, how the project was progressing.

The design system was in its infancy. At this stage there were only basic, app-specific components and basic style guides, so for the web everything had to be constructed from scratch.

I did this by:

  • Collaborating with the other designer, who was responsible for the main website, to figure out how the new styles would work against the old styles and components

  • Speaking with developers to ensure we built a new micro front end, so as not to duplicate work or break existing libraries while we rolled out the new branding in increments

  • Testing new components to make sure they worked across web, mobile and app

  • Collaborating with brand teams to ensure they were happy with the use of the core branding elements

Design QA

Design QA is very important to me: I want the final build to match the design as closely as possible.

I scheduled regular catch-ups with the front-end developers to ensure the handover was clear from my side and that we were aligned on the intended outcome. I was made aware of delays in connecting APIs to the chatbot, so certain features would not yet be working, such as:

  • Feedback

  • Chat history

As we were working to sprint deadlines, and our sprint goal was to QA and release the chatbot, I had to QA in sections when and where I could.

There have been several rounds of QA updates, as branding tweaks and messaging needed to be aligned.

I am currently undertaking QA for functionality and interactions, now that all the APIs have been implemented; below is an example of a video I created for QA.


See QA video here 👈


Beta release

As this is a live project and amendments are being made as we speak, we can expect things to change rapidly as we move into the beta phase, testing on desktop only.

I plan to:

  • Review the tool and monitor usage via Contentsquare to gain insight into how and why users are using it

  • Watch session replays to identify frustrations and pain points and gather what needs changing

  • Use feedback survey tools such as Usabilla to run an open survey during the beta, so users can give feedback and we can make the necessary changes before going live


The more we can learn from user behaviour, especially by monitoring the types of questions and facts users bring to the tool, the better idea we can get of how to improve it, perhaps with new features such as filters or suggestions based on questions asked. We can also refine how we gather feedback and build a product-led approach, as shown in my earlier designs, weaving in other tools such as a plant identifier so that the tools synchronise, support each other, and let users effectively self-serve.


At present, the results we have seen on desktop are:

  • 44% retention rate

  • 78% positive feedback

  • 250+ users said that the chatbot answered their query