Background

Smart Interview Live (SIL) is video-conferencing software designed specifically for structured interviews.

It enables users to conduct interviews using a script with questions selected from a scientifically verified question bank.

These questions come with specific evaluation parameters and rating scales that support objectivity and standardisation across the firm.

Role & Duration

Role: Lead UX Researcher | SHL

Team size: 2
(UX: 2)

Duration: 1.5 months
(Aug 2023 - Sep 2023)

Introduction


Responsibilities

  • Conducting interviews with stakeholders to understand key unknowns

  • Putting together a research plan to conduct both quantitative and qualitative user research

  • Mentoring junior UX designer in quantitative user research

  • Conducting quantitative research via surveys

  • Conducting qualitative research via in-depth interviews

  • Conducting usability testing via in-person and online moderated testing sessions

  • Analysing data, extracting insights, and presenting findings to executive stakeholders

Problem statement

The team working on Smart Interview Live (SIL) had been operating for years without proper research.

There was a lack of clarity in their designs, and they requested some budget for research and testing.

We didn’t have a dedicated UX research team, but due to my scientific research / AI background, I was the go-to person for research initiatives and was more than happy to lead and conduct this study.

I routinely trained other designers, and I onboarded one of our more junior designers onto this project as I thought it would be a good learning opportunity for them.

Who is the user

Individuals who conduct interviews for recruitment purposes.

These may include recruiters who interview candidates regularly or hiring managers / team leaders who rarely conduct interviews.

In terms of target clients, SIL aims to appeal to larger organisations that would like a standardised rating system and question bank to ensure objectivity and fairness in their interviewing process.

The Process


Gathering Requirements

Content Review


Stakeholder Interviews

I started off by getting to know the product (Smart Interview Live): I examined the current live version and reviewed designs put together by the SIL team. This was important because it gave me an overview of all the content and features available. I created a simple flow chart to flesh out the taxonomy, shown below:


Once I had a good understanding of the software, I initiated a series of interviews with key stakeholders to understand the knowledge gaps / key unknowns. I interviewed the following stakeholders:

  • Product Director

  • Product Manager

  • Product Owner

  • UX Designer

  • UI Designer

  • Solution Owner

  • Applied Scientist

  • Content Strategist

Creating Study Plan

Research Goals


Research Methods

Using stakeholder feedback, I was able to put together specific goals for the study. However, limited resources wouldn’t allow us to address all of them with certainty, so I prioritised the goals that were most relevant for this product at this point in time. I summarised the 3 global goals below in order of priority:

  1. Understand which of our 3 layouts is the most usable, as we can develop only 1

  2. Understand interviewing logistics and journey

  3. Understand interviewer behavioural and personality attributes with respect to the candidate

I used tools like the prioritisation matrix (urgent/important) to group related goals.


A. Moderated usability testing to find out which of the 3 layouts to proceed with for development (highest priority: goal 1)

  • I created some rapid prototypes from wireframes and ran moderated testing with in-house recruiters, which didn’t put any dent in the budget.

A. Semi-structured interviews to focus on process and logistics

  • Interviews are good for finding out things we don’t know we don’t know.

  • Due to budget constraints, I interviewed in-house recruiters/interviewers, but to minimise bias I planned to ask people outside the company similar questions in the survey.

B. Survey to focus on interviewer personality

  • I sent out a survey to external interviewers focusing on personality-based questions, as these weren’t the highest priority and needed a quantitative approach.

  • I planned the timelines so that the in-house moderated sessions were complete before the survey launched; this gave me a steer on specifics to focus on and allowed me to validate the interview results through the survey.

The Plan


Study A: Moderated testing of current design layouts (highest priority) + in-house interviews for process/logistics

Study B: Survey for personality based questions and validation of Study A

Recruiting Participants


Sources

Study A: We used internal employees, sourced by reaching out to HR and various other teams

Study B: We used an agency (Prolific) to find participants for the survey

Profiles

I adhered to the following profiles while recruiting participants. Study A participants primarily belonged to Profile 1, while Study B participants were more representative of Profile 2.

Conducting Study A:

Moderated Testing + Interviews

I ran 8 moderated testing sessions: 4 in person and 4 remote, across offices around the world. Each session was 1 hour.

I put together a structured discussion guide to ensure consistency and included various types of questions, some of which are shown below. I followed a chronological order to help participants (interviewers) structure their thoughts, i.e., from how they set up their laptop, to interviewing their candidates, to writing reports…

I also left some space between questions in my script and wrote down all my notes on a printed copy as I conducted the sessions.

I initiated a role-play at the start of the interview to provide a concrete frame and elicit more relevant answers efficiently, as opposed to asking hypotheticals.

...“Let’s do a little role-play. I am a candidate and you are interviewing me.

Walk me through everything you are doing in terms of your se...

Then, after my questions, I took them through rapid prototypes of each of the 3 layouts and probed for feedback, finally asking them to pick one of the 3 and investigating why. The layouts are shown below:

As my final task, I asked users to draw their ‘ideal’ layout using pen and paper.

Running moderated sessions + interviews


Analysing Study A:

Moderated Testing + Interviews

Transcribing to Figma

Now, I would’ve loved to stay in the conference room armed with stickies, but as I worked on a hybrid schedule, I couldn’t exactly carry the whiteboard home. So I converted every single note into a virtual FigJam sticky and converted the sketches to Figma as well.

I then scanned through all the feedback from each of the participants and wrote down overall findings.

The raw notes are shown on the right with each column representing 1 participant.


Affinity Mapping

After a few passes, I eventually ended up with a pretty little affinity map of all findings.

I had written detailed notes on printouts of my interview script for each participant. I even booked a conference room to make use of the massive tables we have.

Qualitative analysis


Collecting data

Key findings & Impact


I compressed the affinity map above to zoom in on the key findings and merged all the user sketches.
I’ll show what I did with the key findings as well. Keep reading!

Key finding 1: Need more space to write notes

Almost all participants mentioned that they write a lot of notes, and they drew a notes input box that was much larger than the one in the designs.

Below is a side-by-side comparison of the wireframe design vs the user-drawn layout. (I converted their sketch to Figma; they originally drew it on paper.)

Solution

I came up with a smart solution to significantly increase the size of the notes box without compromising the space available for other elements. Check out the video below:

Key finding 2: Users want to see the scoring guide simultaneously

Users generally referred to the scoring guide while writing their notes. The previous UX designer had used a pop-up to show the scoring, so users couldn’t write notes or read the questions at the same time.

Solution

I came up with a better approach: a slide-out scoring guide that wouldn’t impede users in their rating or note-taking activities. Check out the video below:

Presenting to executive team


I created a short PowerPoint deck to present these findings to the executives who had been so eagerly waiting. I like the idea of using callout bubbles to emphasise what users thought about different parts of the interface.

The Product Management Director personally reached out to offer his compliments on the quality of my research and how quickly I delivered the results. I presented this deck exactly 9 days after receiving the initial research brief.

Risks


Low sample size

Saturation is reached quickly in qualitative studies: according to the Nielsen Norman Group, 5 participants can surface around 85% of usability issues. Hence, the usability feedback can be considered quite reliable with the 8 participants I tested with. However, the same does not apply to more general research, where experiences vary far more; insights and themes drawn from the interviews may not be accurate and would need a follow-up / validation study.
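
For context, that 85% figure comes from the Nielsen-Landauer model, which estimates the share of usability problems found by n testers as 1 - (1 - L)^n, where L ≈ 0.31 is the probability that a single tester hits a given problem. A quick sketch of the arithmetic in Python:

  # Nielsen-Landauer estimate of the share of usability problems
  # surfaced by n testers (L ~ 0.31 per the underlying research).
  def problems_found(n: int, L: float = 0.31) -> float:
      return 1 - (1 - L) ** n

  print(round(problems_found(5), 2))  # 0.84: the oft-quoted "~85% with 5 users"
  print(round(problems_found(8), 2))  # 0.95 with the 8 participants tested here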

Lack of diversity

All 8 participants were women, and all were recruiters from my firm (SHL). That means 3 variables were held constant, i.e., gender, job role and organisation, and could not be investigated against; e.g., interviewers from another company may have a totally different way of conducting interviews.
Hence a follow-up / validation study with added diversity is required.

Next steps


Let’s assess where we are. In terms of the high-level goals, goal 1 has been successfully addressed; goals 2 and 3 are partially addressed but need to be validated.

Follow up study / Validation

I iterated on the survey questions, which I had been preparing alongside the moderated testing, to investigate the findings from Study A with a much larger and more diverse sample.

Conducting Study B:

Survey

I read up on the literature and prepared around two dozen psychometric questions. These formed the base of the survey; however, I iterated on the questions quite a few times as I gathered more insight from Study A, to ensure that any insights drawn would be relevant. I launched the survey on Microsoft Forms with 500 participants sourced through Prolific (an agency). 5 of the 30 questions are shown below:

In your debriefs, do you usually let other more experienced interviewers make the decisions?

  • 5 point Likert scale (Regularly - Rarely)

Which of the options below best describes your behaviour in regard to conducting interviews with fixed scripts/guides?

  • 5 options (not Likert)

    • I think they are great and I follow them verbatim

    • I follow the scripts but tend to paraphrase

    • I keep the scripts in mind but tend to branch out often

    • I like to have free-flowing conversations

    • I think scripts are limiting and am skeptical of the science behind them.

During the interview, do you let a candidate know if they gave a good answer?

  • 5 point Likert scale (No emotion - cheerful and emotional)

How friendly are you with a candidate?

  • 5 point Likert scale (Very strict - Very friendly)

Do you aim to assess a candidate’s technical knowhow or their behavioural/cultural fit?

  • 5 point Likert scale (Only technical - Only behavioural/cultural fit)

Creating Questions for survey


Analysing Study B:

Survey

Validation of Study A

An example insight from Study A is shown in the image below. The corresponding Study B results differed, and several factors could explain the disparity:

  1. Gender - all participants in Study A were female, whereas Study B had a 50/50 ratio

  2. Company - all participants in Study A were from SHL, whereas Study B participants were from 100s of different firms

  3. Role - all participants in Study A were recruiters, whereas Study B participants had various different job roles

  4. Random chance - Study A only had 8 participants, so its results could have arisen by chance, i.e., statistical convergence hadn’t taken place yet

I did some further investigation to rule out some of the factors listed above. This involved reducing the dimensionality of the data, i.e., converting a multivariate problem into a univariate one, or at least one with fewer variables. An example of ruling out gender as a factor is shown below.

It can be seen in the figure above that the graphs for Study B show similar distributions even after the data was filtered for gender, and there is still a rather stark difference when you compare the graphs of Studies A (all female) and B (filtered for females).

Therefore, it can be concluded that gender is NOT the reason for the disparity between Studies A and B.
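
The cuts themselves were done in Excel; purely as an illustration of the rule-out logic, a minimal pandas sketch (file and column names are hypothetical) might look like:

  import pandas as pd

  # Hypothetical Study B export: one row per respondent.
  df = pd.read_csv("study_b_responses.csv")

  # Answer distribution for the full sample vs the female-only subset.
  overall = df["q_note_taking"].value_counts(normalize=True).sort_index()
  female = df.loc[df["gender"] == "Female", "q_note_taking"] \
             .value_counts(normalize=True).sort_index()

  # If the two distributions match, gender alone cannot explain the
  # gap between Study A (all female) and Study B.
  print(pd.concat({"all": overall, "female": female}, axis=1))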

A study in Terminology

A document with a list of questions to be asked during an interview can be called multiple things: interview guide, interview script, interview template, etc.

These help the interviewer structure the interview and ensure the necessary ground is covered. A screenshot of the overall responses is shown below:

Let’s filter by region

I cut the data by region and recreated all the graphs manually in Excel, matching the format of Microsoft Forms for ease of interpretation. The distribution of terminologies in different regions is shown below.
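
For illustration, the same regional cut can be expressed as a single cross-tabulation in pandas (column names hypothetical):

  import pandas as pd

  df = pd.read_csv("study_b_responses.csv")

  # Share of respondents in each region using each term:
  # rows = region, columns = term, values = row-normalised shares.
  cut = pd.crosstab(df["region"], df["preferred_term"], normalize="index")
  print(cut.round(2))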

Impact of the analysis via graphs above

  • The US group only uses the terms ‘interview guide’ and ‘interview outline’.

  • Had we used the most popular term ‘Interview template’, we would have been in big trouble in the US, which is one of our operating regions.

  • It was only with advanced ‘cuts’ that we could see the full picture and determine the most appropriate terminology.

  • Funny how the same data can lead to very different conclusions.

More analysis

You might think it was only by chance that I happened to cut the terminology data by region to get the above insight. You are correct.

I contrasted the above insight with relevant data from Study B.

I wrote 30+ questions. Microsoft Forms automatically creates graphs from the data, which allowed me to do a quick overall analysis of the results. Fairly standard stuff. I got a lot of good insights in the first pass itself. A screenshot of the board is below.

Basic quantitative analysis


First pass analysis

Potential reasons for the difference

But how could we possibly be sure that we got all the associated factors?

Well, we can’t

Not unless we check everything against everything
...
So that’s exactly what I did :)

I extracted all the data into Excel and recreated all 35 graphs. Some of these were extremely complicated; for example, the screenshot below shows the calculation behind just 1 of the 35 graphs. I also converted the Likert data to numeric and then calculated a correlation matrix for each question.
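
The conversion was done with Excel formulas; an equivalent pandas sketch (question columns and scale labels are hypothetical, and in reality each question had its own anchors) would be:

  import pandas as pd

  df = pd.read_csv("study_b_responses.csv")

  # Map 5-point Likert labels onto 1-5 so questions can be correlated.
  scale = {"Rarely": 1, "Occasionally": 2, "Sometimes": 3,
           "Often": 4, "Regularly": 5}
  likert_cols = ["q_defer_to_seniors", "q_candidate_feedback",
                 "q_friendliness"]
  numeric = df[likert_cols].apply(lambda col: col.map(scale))

  # Pairwise correlation matrix across the Likert questions.
  print(numeric.corr().round(2))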

Responses could potentially vary across factors such as gender, industry, experience, interviewing frequency, etc.

Which would mean creating some 600 graphs...

It would be impossible for anyone to create 600 graphs by hand in 2 days.

Thankfully, my expertise in Excel and data analysis allowed me to easily create dynamic charts that automatically rendered as PNGs in batches of 30, so we only needed to save 20 or so batches. I have added a screenshot of the board we used to identify trends and put down stickies.

Feel free to count the number of graphs. I would’ve done it all in Python or MATLAB, but my goal was to teach this to other designers using tools they are familiar with, hence Excel was the obvious choice.
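
For comparison, the same question-by-factor sweep could be scripted in Python with pandas and matplotlib, again with hypothetical column names:

  import os

  import matplotlib.pyplot as plt
  import pandas as pd

  df = pd.read_csv("study_b_responses.csv")
  os.makedirs("charts", exist_ok=True)

  questions = [c for c in df.columns if c.startswith("q_")]
  factors = ["gender", "region", "industry", "experience", "frequency"]

  # One bar chart per (question, factor) pair; the real study's longer
  # factor list is what produced the ~600 graphs.
  for q in questions:
      for f in factors:
          ax = pd.crosstab(df[f], df[q], normalize="index").plot.bar()
          ax.set_title(f"{q} by {f}")
          plt.tight_layout()
          plt.savefig(f"charts/{q}_by_{f}.png")
          plt.close()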

Finally, I pulled all the key findings into an executive summary and created some supporting graphics. A screenshot is shown below.

How findings affect design

‘Interview guide’ will be the go-forward term used in our platforms to refer to a set of interview questions.

... There were quite a few findings, but they are a bit too specific to add to a case study ...

Training a fellow UX Designer

Quantitative and basic statistical analysis, like that needed for Study B, is part of my area of expertise, and one of my goals was to train another UX designer on the team to conduct a similar analysis in my absence.

I began by having them observe my sessions and methodology, then proceeded to give them small tasks that I had already done, such as identifying factors that might affect the answer to a question. I did the analysis on my own in Excel and had them observe me writing the formulas, giving them in-person walkthroughs of functions such as:

  • LOOKUP

  • COUNTIF

  • SUMIF

  • Nested IFs and Conditional statements

  • CORREL (Correlation coefficient between 2 numeric arrays)

  • etc

I then asked them to replicate, from scratch, the Excel sheet that I had made, 1 graph at a time, and supported them until they could do it on their own.

I also spent time brainstorming the impact that each finding could have on the designs. In the above case, it can reasonably be hypothesised that a large proportion of users will write notes on paper, so I ideated a new feature, since added to the roadmap, that would allow interviewers to upload a picture of their physical notepad, either through their webcam or a mobile app utility, instead of typing notes into the interface.

... And that is just the tip of the iceberg, keep scrolling!

How findings affect design

Advanced quantitative analysis


This is where my data skills really came into play. In the examples above, you saw hypothesising, investigation and ideation contrasting the differences between Study A and Study B. However, there was quite a bit of untapped data within Study B (the survey) itself. Keep reading!

Outputs

Personas

We had sufficient insight from Studies A and B to craft highly detailed personas. We created multiple personas to account for differences in behaviour across dimensions, e.g., American vs English participants, male vs female, junior vs more experienced, etc.

This was done to ensure representation of all potential users of our products and capture micro variations.

All in all, I created 4 personas (2M/2F): Recruiter, Junior Interviewer, American Interviewer and English Interviewer. The persona for the English Interviewer is attached below:

I had prepared an executive summary for each study; however, those were aimed at non-UX stakeholders.

I proceeded to create detailed documentation and additional UX elements such as personas and journey maps.

The result


All high-level goals have now been fully addressed, and the team finally has a path forward.

Thank you for reading :)

Contact me for more details

Update (Oct 2023):

Senior leadership liked my work so much that I received an award at the yearly ceremony and was also offered the opportunity to become the Lead Product Designer/Strategist for the entire suite of products that this project belonged to :)