
Recruiting Chatbot MVP

Sense 2020
responsive preview
Background
About Sense

Sense provides recruiting agencies with a suite of tools to empower recruiting teams to do their jobs more effectively and efficiently. Sense automates manual tasks with personalized communication to increase candidate engagement with recruiters, while also integrating with applicant tracking systems to optimize recruiter reach and to keep the pool of candidates constantly updated. When the pandemic hit, we already had an ambitious timeline for launching our recruiting chatbot, but we had to reconfigure the roadmap to sprint to market to stay competitive in a world that seemed to turn upside down.

Problem Context
Mundane Tasks and Product Strategy

Most recruiters juggle several tasks and cannot always give their full attention to every candidate coming through their recruitment funnel. Conducting calls with candidates is time-consuming, and some introductory calls can be as mundane as verifying information on a resume or going over a job description. These calls can be fruitless if the candidate turns out to be unqualified, which is a frustrating experience for both parties involved. In the Bullhorn Global Recruiting Insights & Data Survey of 2019, 84% of recruiters agreed that staffing and recruiting businesses must embrace digital transformation to remain competitive - while this is open to interpretation, I like to think that AI and chatbot platforms are part of that transformation.

The next phase of our product strategy, landing mid-pandemic, was to tackle the complex area of sourcing via SMS and web bots. Candidate acquisition/sourcing has been the top priority for recruiting agencies according to the Bullhorn surveys of 2020 and 2021, and according to those same surveys, the most challenging part of the recruitment lifecycle is sourcing candidates. While it's no surprise the Covid-19 pandemic brought about job shortages, talent shortages have been a top challenge for years across various sectors.

Reframing the Problem
How can we accelerate the hiring process for recruiters while maintaining an exceptional candidate experience?

The goal was to create a chatbot that could automate three of the most common use cases: data collection, pre-screening, and job matching. Our ambition was to create a strong foundation with our data collection chatbot that embraced one of Sense's most powerful features: automatically updating data in the ATS based on candidate responses.


  • Create a chatbot that would drive more candidates through the hiring funnel
  • Save agencies money by automating mundane tasks (such as updating ATS data manually)
  • Give candidates a sense of security in their job search through an engaging chatbot
Role
The Team
  • 1 Machine Learning Engineer
  • 1 Product Manager
  • 1 Front-end Engineer

And me, tasked with leading all design efforts for the chatbot team.

Users & Customers
Agencies want to...
  • lower candidate acquisition cost (lowering time-to-fill and interview-to-offer ratio)
  • make their recruiting workflows more efficient at a larger scale

Recruiters want to...
  • spend less time on manual data entry (up to 60% of recruiter time is spent on data entry)
  • spend less time talking to unqualified or unavailable candidates
  • foster good relationships with candidates in their pipeline

Candidates want to...
  • feel like they “matter” to a recruiter or agency
  • have clear and consistent communication with recruiters
Discovery
Understanding Recruiting Chatbots

I had no prior experience working with chatbots or working in the recruiting space - hence why Sense was an attractive place to work, since I would get to dive into something new. I started off with the basics to educate myself by googling things like "What is a recruiting chatbot?", "Examples of the best and worst chatbots", and "How are chatbots useful in the recruiting space?". No fancy research methods here, just good ol' googling. I couldn't be sure the next hire (at the time, we didn't have a product manager) would have any knowledge of chatbots or the recruiting space either, so I compiled everything I could find into a Notion doc to be used as part of the Chatbot Product Team onboarding documents.

I created a feature analysis of a few top recruiting chatbots to see what feature set we would have to include to remain competitive. Even though our first version was going to be a very slim version of what these other chatbots offer, it was important to get a feel for the features we could make available down the line. After all, the most powerful thing about the Sense Recruiting Chatbot is that it would seamlessly integrate with our Sense Engage and Messaging platforms.

My findings validated what customers were already asking for: a chatbot to help automate introductory data collection calls and pre-screening processes, and to write the collected data back to their existing ATS. When we hired our product manager, we used the list to refine what we wanted to see in our chatbot MVP. We decided to go with the data collection web chatbot as our first team launch, with the SMS chatbot as our second roadmap item.

First Design
Designing Reva

A chatbot demo had been in the works for months before I came on board, with only a vague understanding that it would be a web-based bot that would use NLP (Natural Language Processing) to tie data points logged in a conversation back to an ATS. Our machine learning engineer had been working on an API to write candidate responses back to the ATS, Bullhorn, and he was also refining libraries to use for data validation (so we don't end up with trash data). Meaning, if a question asked for a zipcode, the engine would check whether a valid zipcode was provided or not. My portion of the work was to define the style guide and work with the product manager to define the use cases and pick the best scenario to demo.
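
To make that validation idea concrete, here is a minimal sketch of the kind of check involved, written in TypeScript. The function name, result shape, and US-zipcode regex are my own illustrative assumptions, not the actual validation engine.

```typescript
// Minimal sketch of validating a candidate reply before ATS writeback.
// The names and the US-zipcode regex are illustrative assumptions,
// not the actual Sense validation engine.

interface ValidationResult {
  valid: boolean;
  value?: string;   // Normalized value to write back to the ATS field, if valid.
  error?: string;   // Message the bot can surface when validation fails.
}

function validateZipcode(reply: string): ValidationResult {
  // Pull a 5-digit (or ZIP+4) token out of a conversational reply,
  // e.g. "my zip is 94107" -> "94107".
  const match = reply.match(/\b\d{5}(?:-\d{4})?\b/);
  if (!match) {
    return { valid: false, error: "Sorry, I can't find that zipcode." };
  }
  return { valid: true, value: match[0] };
}

// Only valid values get queued for writeback; otherwise the bot re-prompts.
const result = validateZipcode("I'm in 94107, near downtown");
console.log(result.valid ? `Write back ${result.value}` : result.error);
```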

My early design concepts took inspiration from Cortana, a productivity assistant in Microsoft 365. Cortana's soft blue glow over an unassuming circular shape felt simple yet inviting - both traits we wanted to portray in our chatbot. The design of the conversation elements follows standard conventions observed in messaging experiences today (no need to reinvent the wheel). This includes encasing messages in a chat bubble, having a typing indicator for when a message is being typed (or in this case, to double as a loading indicator), and making it responsive and comfortably viewable on a mobile device.

Chatbot Design System

For the candidate-facing chatbot experience, I created a design system derived from existing styles used in the Engage platform. The newest addition to the system was the color set for the rating scale. I chose colors based on contrast requirements and made sure to design within the latest WCAG standards.

Since this demo was going to be web-based, I pulled in existing button styles and fonts. At the time, our brand designer was working on a new website design with beige as a core color, so I experimented with making the chatbot's chat bubbles beige - but ultimately this felt unsightly and awkward. Gray or a lighter version of our "Hawaiian Blue" was favored in my rounds of feedback with the design and product teams because it was easy on the eyes.

Research
Internal Dogfooding

About 2 months in, our first candidate-facing chatbot demo was ready. Internal dogfooding would be our great internal unveiling to our stakeholders and peers of what the chatbot team had achieved, while providing the opportunity to gain valuable feedback and insights. We created a Slack channel for people to share their feedback, screenshots of possible bugs, or anything else they had to say. The research plan I created focused on gathering feedback on the chatbot experience and, on a more functional note, on whether the data collected could successfully be recorded back into the ATS (Bullhorn). A lot of the feedback was positive, and we also got a lot of interesting opinions that invalidated some assumptions we had.

Feedback & Results
  • Some conversational phrases were difficult for the bot to parse and understand
  • There wasn’t a way to answer “not any” or “N/A” for multiple choice questions
  • Participants noted that having a response confirmation would be nice
  • Error states were unhelpful and vague
  • The tone and voice could be less robotic or a little friendlier

The Updates We Made
  • Bot can now understand conversational responses like "My phone number is 4152221111." (a minimal parsing sketch follows this list)
  • Bot will confirm the data you entered before moving on to the next question
  • Validation responses are more conversational, e.g. "Sorry, I can't find that zipcode."
  • Provided an out for questions where none of the answers apply to the user
  • Created a library of phrases to soften the tone and feel more natural
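
As a rough sketch of the first two updates, here is how pulling a phone number out of a conversational reply and confirming it might look. The function names and regex are assumptions for illustration, not the production NLP engine.

```typescript
// Illustrative sketch (assumed, not the production NLP engine) of pulling a
// phone number out of a conversational reply and confirming it with the
// candidate before writing it back to the ATS.

function extractPhoneNumber(reply: string): string | null {
  // Accept formats like "4152221111", "415-222-1111", or "(415) 222-1111".
  const match = reply.match(/\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}/);
  // Normalize to digits only so the ATS field always gets the same shape.
  return match ? match[0].replace(/\D/g, "") : null;
}

function confirmationPrompt(phone: string): string {
  // The confirmation step we added based on dogfooding feedback.
  return `Just to confirm, your phone number is ${phone}. Is that right?`;
}

const phone = extractPhoneNumber("Sure! My phone number is (415) 222-1111.");
console.log(phone ? confirmationPrompt(phone) : "Sorry, I couldn't find a phone number in that.");
```
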
The other part of the MVP
Conversation Flow Designer

For the candidate chatbot experience, I was able to run tests through UserTesting.com to further validate chatbot use cases (data reactivation and pre-screening). We showed the recruiting chatbot to some of our largest customers (PrideStaff and Talent Launch) and set them up with trial accounts to test it out with their top agencies.

Meanwhile, I was working on the other portion of the MVP - the Conversation Flow Designer. The Conversation Flow Designer would allow users to create and customize their chatbot conversations. This includes writing custom messages, questions, and responses, and defining how collected data points write back to the ATS. We would also integrate this into Engage Journeys creation, so recruiters can create full campaign flows from candidate discovery to pre-screening to post-hire check-ins.

Insights & Constraints
Known Insights
  • Customers want a chatbot that can collect and record data to their applicant tracking system (ATS)
  • Customers want a chatbot that can pre-screen candidates for qualifications and required skills
  • Customers already signed onto our platform want it to work with Engage Journeys, so that it can fully integrate into their workflow

Technical Constraints
  • We cannot alter Engage if we fully integrate within its Journeys framework (we have to keep the existing page hierarchy and layout)
  • Agency variables are unique to each agency: a variable from Agency 1 and a differently named variable from Agency 2 can write back to the same ATS field in Bullhorn
Design Elements
Design elements I created for the conversation flow designer:

The Grid: A subtle pattern that makes it clear the background is a canvas, separating the elements of the conversation flow from the background.

The Flow: Arrows were the most obvious choice to denote a linear direction. Thinking about scale, I knew having one node per level/row would have limitations, so I decided to experiment with a flowchart + canvas editor hybrid to make it space-efficient.

Variables: To write back to the ATS, agencies use variables that are imported into Sense. There are various kinds of variables, but Chatbot will only have access to Candidate Variables (or Submittal Entity types), which keeps the use case simple: the chatbot will only be used to communicate with candidates in their submittal journeys.

Node Types: Each node needed to clearly communicate to users what function it supports: message vs. question.

Editing Nodes: Editing needed to be flexible enough to move about the canvas, but it also needed to scale should more fields be added to the nodes. Side panels were a common pattern, but they felt too rigid, and navigating the canvas would be severely impacted if you wanted to edit at the same time.
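
To make these elements concrete, here is a minimal sketch of how the message and question node types and their ATS writeback might be represented. The type names and fields are my own illustrative assumptions, not Sense's actual data model.

```typescript
// Illustrative node model for the conversation flow (assumed shapes,
// not the production Sense schema).

// A candidate variable imported from the ATS that an answer can write back to.
type CandidateVariable = string;

interface MessageNode {
  kind: "message";
  id: string;
  text: string;        // What the bot says; no response expected.
  next?: string;       // id of the next node in the flow.
}

interface QuestionNode {
  kind: "question";
  id: string;
  prompt: string;                          // What the bot asks the candidate.
  responseType: "text" | "multiple_choice" | "rating";
  writebackField?: CandidateVariable;      // ATS field the answer writes back to.
  next?: string;
}

type FlowNode = MessageNode | QuestionNode;

// A tiny two-node flow: greet the candidate, then collect their zipcode.
const flow: FlowNode[] = [
  { kind: "message", id: "greet", text: "Hi, I'm Reva! Let's update your profile.", next: "zip" },
  { kind: "question", id: "zip", prompt: "What's your zipcode?", responseType: "text", writebackField: "candidate.zip" },
];
console.log(flow.map((n) => n.kind).join(" -> "));
```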

Design Exploration
Conversation Flow Layout

There are many moving parts to creating the conversation flow designer. One part is how to lay out the conversation flow. In my research, I'd seen this done as tiles or cards outlined on a canvas, but I wasn't sure what our needs were at this stage. I explored a few options: a top-to-bottom linear flow, a left-to-right row-by-row flow, and a flowchart-esque design where the flow can be more omni-directional. Eventually, we landed on keeping it simple due to technical constraints and went with the vertical/linear format because it was easier to position the tiles - now called nodes - on the canvas.

Design Exploration
How to Edit Nodes

The other part of the canvas experience was the editing of nodes. We wanted to allow users to edit the conversation flow and customize each piece of content they assemble in the flow. For starters, we'd have some questions they could add and a message node, and we knew that down the line we'd need a conditions node to compare collected data and route to the appropriate portion of the conversation flow.

My first idea took inspiration from Adobe, where we would have a snappy toolbox that could snap to points in the grid or be completely draggable. I liked this idea because it was less imposing and could be moved around to view the canvas easily - however, the drag-and-drop functionality was out of scope for our MVP. My second idea was to use the actual node on the canvas as a form to create and customize content. However, this idea wasn't very scalable (we knew we'd be adding more fields). The third idea was to allow editing to happen in a floating modal. Similar to the toolbox idea, but with more real estate to handle more fields and longer inputs, while retaining the flexibility to move around the canvas.

Design Exploration
Node Designs

Once I settled on how the nodes should be edited, I explored what these different nodes could look like. The most used node would be our question node. I did some explorations on the types of questions we could support and researched how various survey builders handle this problem. To me, it made sense to divide this node up by the different types of responses it would take. The other part (and the main selling point of Chatbot) was making it easy to select an ATS field to write back to. This feature was essential because it would save users time by reducing the amount of manual data entry they would have to do.

We were fast approaching June, and we wanted to ship a condition node to handle some of the in-flow routing before our big public launch in Fall 2020. At its most basic level, this node would compare the data we collect during the conversation to a selected value (either an ATS value or a custom value), and then split the experience down a different path based on the comparison. And so, we had our first branching use case. This is where the design gets a little complicated, because branching a flow into two is just the first permutation of that conversation. Theoretically, users can keep on branching using a conditional node, and the permutations of what a unique conversation could look like could get visually overwhelming on the canvas.
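
As a rough sketch of what that comparison could look like under the hood, here is one way a condition node might be modeled and evaluated. The shape, operators, and field names are assumptions for illustration, not the shipped implementation.

```typescript
// Illustrative condition node (assumed shape and operators, not the shipped
// implementation). It compares a collected data point to a selected value
// and routes the conversation down one of two branches.

interface ConditionNode {
  kind: "condition";
  id: string;
  variable: string;                             // Data point collected earlier in the flow.
  operator: "equals" | "not_equals" | "greater_than";
  value: string | number;                       // An ATS value or a custom value.
  onTrue: string;                               // Next node id if the comparison passes.
  onFalse: string;                              // Next node id otherwise.
}

function nextNodeId(node: ConditionNode, collected: Record<string, string | number>): string {
  const actual = collected[node.variable];
  const passes =
    node.operator === "equals"     ? actual === node.value :
    node.operator === "not_equals" ? actual !== node.value :
    /* greater_than */               Number(actual) > Number(node.value);
  return passes ? node.onTrue : node.onFalse;
}

// Example: route candidates with 2+ years of experience to a deeper pre-screen.
const experienceCheck: ConditionNode = {
  kind: "condition", id: "exp", variable: "years_experience",
  operator: "greater_than", value: 1, onTrue: "deep_prescreen", onFalse: "wrap_up",
};
console.log(nextNodeId(experienceCheck, { years_experience: 3 })); // "deep_prescreen"
```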

Design Exploration
Branching Explorations

If you think about conversations, they have a tendency to branch off into tangents. Chatbot is no different, other than that those tangents serve a purpose. Our biggest challenge was figuring out how to scale these conversations. Every conversation has its limits, and we were just beginning to explore ours. We did know that multiple branches could spell trouble for us. Our first use cases were simple enough, but what if users wanted a more complex pre-screening flow? What if that meant being able to dive deep into specifics based on candidate responses? I went back to the team to gather feedback to better understand how we should handle branching for our MVP.


Feedback
  • From engineering:
    Branching is a technical challenge; the more we can simplify it, the better we can scale it in the future.
  • From product manager:
    Branching needs to be easy to do and undo. We don’t want to impose limitations on our users because we want to encourage users to customize their chatbot to their liking.
  • From customers:
    We want the option available, so we can automate more complex interviews with candidates and potentially use it to match candidates with various open jobs.

You can explore my Figma file below:

Figma File
Branching View File
Initial Mocks
Conversation Flow within Engage

For our MVP, we assumed we would need to integrate chatbot creation inside of Sense's main platform - Engage. Engage already had a workflow in place for a variety of campaigns for candidate outreach and sourcing, so it made sense to elevate the product by adding Chatbot to the Engage product suite. If we merged our roadmap with Engage, then we could borrow resources to build Chatbot faster.

We gathered feedback by printing out the mocks and taping them up on a wall for high visibility. We wanted to generate interest from our peers and get as much feedback as possible, and the informality of this led to a lot of people who would otherwise not have known about Chatbot (sales, CSMs, and marketing) asking us about the product. To me, it was a cool experience to chat with those outside our team about what we were trying to achieve, because we got interesting perspectives we wouldn't have otherwise learned about if we had stuck to our main stakeholders.

After a few iterations, we decided this wasn't the right route for Chatbot. For starters, the codebase wasn't the best (we have since optimized it, but at the time, it was messy and complex). Any small change was a huge engineering effort because of how it was built. Secondly, we started to think about how to elevate Chatbot as its own product - a standalone from Engage instead of an add-on feature. From a business standpoint, this would bring our product suite to three with Engage, Messaging, and Chatbot, which would be a huge revenue booster and add credibility to our company with investors.

You can explore my Figma file below:

Figma File
Conversation Flow in Engage View File
My Concerns
This might scale poorly.

As development proceeded for PrideStaff, I started to get nervous about the design implications of going with a form-field-led modal to edit the nodes. I voiced my concerns to the product manager and the engineering team, and we all agreed that if we continued to add more field rows - thereby extending the modal's height - it would eventually get unwieldy.

A year later, we would finally implement a tab-based layout to help these modals scale better. I had been pushing for a better presentation of the edit modal ever since it officially launched. It wasn't done earlier because, during the pandemic, our priorities were focused on delivery, not improvement or refinement.

Deliverables
Chatbot Conversation Flow MVP

The Conversation Flow Designer was officially launched in 6 months. It was no easy feat: we experienced high contract churn during the pandemic and had to sell customers on Chatbot before it was even completed - but we did it, and it helped keep us in business. From a product standpoint, we had to pivot on some things we initially set out to accomplish. The SMS chatbot was one of those key and highly sought-after features we had to punt to the next quarter. Below is a Figma file of some of the deliverables I created for our MVP.

You can explore my Figma file below:

Figma File
MVP View File
Good for Business - Despite the Pandemic
Post-Launch Impact

June 14, 2020
Sense announced the Chatbot to the world.

July 28, 2020
Launched the Conversation Flow Designer.

August 6, 2020
Signed our FIRST paying customer on Chatbot (our 3rd product after Engage and Messaging). VOLT will use Chatbot from Aug 15-Nov 15 (a 3-month pilot) for 5 of their top clients. The pilot will be $7,500 for 3 months, which works out to $30k in ARR.

August 11, 2020
Chatbot conversation designer supports outbound pre-screening use case.

December 14, 2020
Signed PrideStaff on Chatbot: our biggest enterprise-wide rollout at $200k/year, making them our largest customer by revenue. The rollout covers all of their branches with unlimited bots, for $600k in total annual ARR.

December 20, 2020
We had a stretch goal of selling $250k worth of Chatbot revenue. Today, we have 10 signed Chatbot customers and have blown past that stretch goal, reaching $500k.

While the initial sales may not seem impressive to many, it's important to keep in mind that Sense was in survival mode. The staffing industry was hit incredibly hard by the pandemic, so we were hoping to keep our heads above water, and ended up surpassing expectations!

Throughout this whole process, I’ve enjoyed the fast iterative environment the most. Discovering and testing technical limits with design is always a rewarding experience for me because it forces me to be creative in the face of impending obstacles. I found myself looking back on past designs and wishing I had more time to refine. I truly believe our team did the best we could in the time we had with the limited resources we had access to, but I also believe that the user experience leaves a lot to be desired in its current state.