Chatbot Experiment.

2021

CONTEXT:

The conversational marketing team was running an experiment with a group of users, exposing them to a chatbot first instead of giving them immediate access to human-led support.

We wanted to ensure that this change did not affect the customer experience.

HYPOTHESIS:

We believe customers default to chatting with support when they need help because it is faster than looking for resources to answer their questions independently.

We believe this because previous research found that chat is not seen as a formal support medium and is preferred because of the speed of interaction. Customers often begin to chat and then self-serve simultaneously, rather than attempting to self-serve first.

By focusing on encouraging self-service, we can reduce ticket volume while enabling customers to find solutions quickly.

RESEARCH APPROACH + IMPACT

  • We wanted to ensure that we understood the impact of introducing the chatbot experience for the customer base.

    Customer satisfaction and trust in using the chatbot were paramount: they would not only help the team understand what needed to be improved, but also link the customer story to the quantitative data coming through from the experiment.

  • We decided to perform a longitudinal study over the course of 2 months.

    SENTIMENT REVIEW:

    We benchmarked usability before the experiment started using a lightweight System Usability Scale (SUS) survey, giving us a baseline to measure the user experience against after the experiment was released.

    INTERVIEWS:
    We also interviewed the users involved in the experiment to understand what their experience was like during the experiment.

    BEHAVIOUR:

    The purpose of the experiment was to understand the impact of a chatbot on support ticket volume. Therefore the research also focused on ticket volume and ticket type analysis before and after the experiment was released.
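    The before/after ticket analysis described above can be sketched as a simple count comparison. This is an illustrative sketch only; the `ticket_type_deltas` helper and the ticket-type labels are hypothetical, not HubSpot's actual reporting pipeline:

    ```python
    from collections import Counter

    def ticket_type_deltas(before, after):
        """Compare ticket counts by type before and after an experiment.

        `before` and `after` are lists of ticket-type labels observed in
        each period; returns {type: (before_count, after_count, change)}.
        """
        b, a = Counter(before), Counter(after)
        return {t: (b[t], a[t], a[t] - b[t]) for t in sorted(set(b) | set(a))}

    # Hypothetical example: "billing" tickets drop after the chatbot launch.
    deltas = ticket_type_deltas(
        before=["billing", "billing", "setup"],
        after=["billing", "setup", "setup"],
    )
    ```

    In practice the same comparison would be run per ticket type over the two-month study window, alongside overall volume.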

  • The results showed that users barely noticed the difference in their experience.

    The results from both the experiment and the research showed that the customer experience did not degrade with the change, which gave the team confidence in productising the chatbot.

    Our SUS score showed a minimal change in overall usability: a −2.3% difference after the chatbot step was added to the help process.

    The change fell within the margin of error at a 95% confidence level, which further strengthened the case for productising the chatbot experience.
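    For reference, a standard SUS score is computed from ten 1–5 Likert items: odd-numbered items contribute `score − 1`, even-numbered items contribute `5 − score`, and the sum is scaled by 2.5 to give a 0–100 score. A minimal sketch, assuming the lightweight survey followed the standard SUS scoring:

    ```python
    def sus_score(responses):
        """Compute a System Usability Scale score from ten 1-5 Likert responses."""
        if len(responses) != 10:
            raise ValueError("SUS requires exactly 10 item responses")
        total = 0
        for i, r in enumerate(responses, start=1):
            # Odd items are positively worded, even items negatively worded.
            total += (r - 1) if i % 2 == 1 else (5 - r)
        return total * 2.5  # scale the 0-40 raw sum to 0-100

    sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # best possible responses -> 100.0
    ```

    Benchmarking then reduces to averaging these per-respondent scores before and after the change and comparing the means.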

E2E Redesign:

2024

Redesigning how we onboard our Free & Starter customer base

CONTEXT:

The overarching aim is to "Make every new B2B business a success story". This research supports the relaunch of the HubSpot low-end experience, which aligns with a new mission and vision for the ‘low-end’ persona. The vision is for the HubSpot low-end experience to be the place business builders come to start and scale their business, empowering them to do more with less in less time, and building a thriving content and community ecosystem to help them grow their careers.

RESEARCH APPROACH + IMPACT

  • The primary objective of this research was to evaluate the comprehension and usability of the newly redesigned sign-up and initial onboarding experience for both Free and Starter HubSpot users. Specifically, the study aimed to identify potential friction points and areas of confusion within the new flow, focusing on whether users:

    • Perceived the sign-up process as easy and quick.

    • Found the presented use cases relevant to their needs and understood their purpose within the onboarding process.

    • Comprehended the value and implications of CRM configuration templates and data synchronization requests.

    • Felt motivated to complete essential setup actions and engage with the onboarding tasks.

    • Understood how the initial experience connected to the in-app environment and addressed their selected use case.

    • Could navigate the setup process intuitively and understand the purpose and content of new interface elements.

    • Generally understood the product's purpose and functionality based solely on the redesigned experience, without explicit explanation.

    Ultimately, the research sought to understand whether the redesigned flow effectively demonstrated HubSpot's value upfront and provided the necessary reassurance and proactive support for our target segment.

  • This research employed a qualitative approach utilizing moderated usability testing sessions. The study was conducted in two phases:

    • Moderated Prototype Testing: This initial phase involved testing a prototype of the redesigned sign-up and onboarding flow. The goal was to gather early feedback on user interaction, identify significant design or content issues, and inform necessary iterations before potential live deployment.

    • Moderated Live Software Testing: This subsequent phase involved testing specific elements of the live, redesigned software. This allowed for a deeper understanding of user intent, their willingness to proceed with tasks (e.g., data import), and their overall readiness within the sign-up and setup process in a more realistic environment.

    The rationale for using moderated sessions was to directly observe user behavior, understand their thought processes through verbalizations, and employ probing questions to uncover comprehension difficulties and points of confusion that might not be evident through unmoderated testing.

    Participants: The target audience comprised small businesses (1-25 employees) actively using or considering using a CRM. Participants were recruited through a respondent panel and shared key characteristics of the target market persona, including an understanding of technology's value for growth and an intention to scale their business. A total of 15 completed sessions (across both prototype and live testing) were conducted with individuals in roles such as Director, Co-founder, Owner, Operations Manager, and Marketing Director.

    Procedure: The usability testing was conducted over several weeks, encompassing research plan finalization, prototype testing, live software recruitment and testing, and subsequent analysis of the collected data.

  • The key findings from this research highlighted significant issues across three critical dimensions of the user experience:

    • Audience (WHO): The study revealed a lack of effective audience segmentation within the onboarding flow. The appropriateness of opinionated recommendations varied significantly based on the user's stage of digitization.

    • Content (WHAT): The research underscored the critical need for contextual information throughout the experience. The absence of sufficient context, in areas such as the implications of use-case selection and the reasoning behind recommendations, left participants confused about why they were being asked to act.

    • Timing/Placement (WHEN): Participants reported feeling overwhelmed and confused by the timing and placement of certain requests and guidance. While users acknowledged the potential value of opinionated paths and data import options, introducing these elements early in the sign-up and unboxing flow was perceived as "too much too soon." Specifically, users expressed discomfort with being asked to import data during the initial sign-up process.

    The impact of these results, combined with quantitative data from the experiment, ultimately pushed the team not to productise the initial design and instead to focus on a more personalised, segmented approach.


STRATEGIC RESEARCH