Awards

I am proud to say that I was recognised for the value I added to this project at a global award ceremony at SHL (1,000+ employees).

My work was put on display in SHL’s head office in London.

Role & Duration

Role: UX Designer | SHL

Team size: 9
(UX: 4, UI: 3, content: 1, strategy: 1)

Duration: 1 year
(Dec 2021 - Dec 2022)

Introduction


Responsibilities

  • Working with product managers to scope features and refine requirements

  • Redesigning information architecture and the general experience of the product

  • Conducting user research & creating personas

  • Creating and owning designs for various features, from sketches to hi-fi wireframes

  • Conducting usability testing on updated features/patterns, including moderated, unmoderated and A/B testing; analysing data; creating KPIs; and owning the whole testing process

  • Working with UI designers and the copywriter to establish platform-wide visual and copy patterns

  • Redlining QA environments and working with QA to prioritise issues

Problem Statement

SHL was left with 2 admin platforms as a result of a merger: legacy Talent Central (TC) and iAssess. The two had wildly different UX and were used in conjunction, as each had quite a few features the other lacked.


As a result, SHL needed to maintain 2 sets of client accounts and records. Significant operational resources were dedicated to serving this redundancy, including hundreds of staff as well as tech resources.

Background

People are the most vital asset of any company, and with millions getting hired every day, finding the right candidates is crucial.

However, most hiring is subjective, inconsistent, and costly. That’s where SHL comes in. SHL provides assessments to efficiently & objectively evaluate talent.

The way to create, organise and deploy these assessments is through an enterprise admin platform: a one-stop shop for all things hiring.

Some of the clients using Talent Central+ are shown below:

Business Solution

The problem was solved by the inception of a new platform, one that would combine the cost efficiency of the younger platform with the comprehensive functionality of the older one.

This new platform was called Talent Central+ (TC+).

Who is the user

There are typically 2 types of primary users for this platform:

  1. Client recruiters and HR members

  2. SHL internal & support teams

These will be expanded upon in the Personas section.

Scopes and Constraints

There were 3 main types of constraints:

  1. Business constraints: these came as a result of business priorities around timelines and resources, which directly factored into the roadmap & MVP.

  2. Client constraints: some clients requested specific features and were an attrition risk, which needed to be taken into consideration.

  3. Work-style constraints: one of the primary work-style constraints was time zones, as the product team was spread across 3 countries (US, UK & India).

The Process

We based our approach on the industry-standard Double Diamond process to create this humongous piece of software. We worked using Agile methodology, so there were half a dozen releases with various iterations of features, and the design process was structured to accommodate those iterations.


Discover

Content Audit

We started off with thorough internal research & examination of the 2 legacy platforms: iAssess and Talent Central. I created site maps to document the software and ran a content audit combined with a feature audit to contrast the two and surface any redundancies.


Legacy Site Maps

Once we had an overview of the content, we held workshops with the product team to categorise the content into 50+ features, which were further grouped into work-streams to distribute the workload efficiently. The diagram below shows the categorisation of some of the main features and is by no means an exhaustive list.

We did this to focus the designers and product managers, as the amount of context required per feature was very high. For reference, a single feature took an average of 5 weeks to design, and this does not include the iterations that resulted from interactions with other features.

To combat an endless stream of rework, we maintained daily sync-ups to share high-level updates so each of us was aware of the others’ progress.

Feature Audit

Due to the massive scale of the project, we didn’t have very specific requirements for each feature, so I invested some time in conducting in-depth interviews with members of various teams to build detailed requirements for each feature in my work-stream.

Quite a few interviewees from the product team were able to provide specifics about how each feature was used, but I wanted to interview end users as well. However, due to constraints, we didn’t have the option of interviewing client users and instead interviewed our customer support, deployment and HR teams, who served as a proxy for various levels of client users, since our internal teams also used our software in their day-to-day roles.

Since we were still at a preliminary stage, I chose a 90% confidence level and a 10-point margin of error to calculate a rough sample size using the formula shown on the right, to double-check the validity. This came to a modest size of 7.
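
The exact formula isn’t reproduced here, so as a hedged sketch: assuming the standard sample-size calculation for a proportion (Cochran’s formula) with a finite population correction, which is my reconstruction rather than the exact sheet we used, the arithmetic looks like this:

```latex
% Unadjusted sample size, with z = 1.645 for 90% confidence,
% p = 0.5 (worst-case variance) and e = 0.10 (10-point margin):
n_0 = \frac{z^2 \, p(1-p)}{e^2} = \frac{1.645^2 \times 0.25}{0.10^2} \approx 68

% Finite population correction over the small pool N of qualifying
% internal proxy users (N is not stated in this case study):
n = \frac{n_0}{1 + \frac{n_0 - 1}{N}}
```

With only a handful of qualifying internal users available, the corrected value drops to single digits, which is consistent with the sample of 7 mentioned above.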

Stakeholder interviews and SUS

The primary goal of the interviews was to build clear and detailed requirements, which I did. However, those requirements are far too specific to add meaning to this case study, so I’ve instead included some high-level statistics that were collected second-hand.

Findings

I’ve put the SUS score on the graph on the right for comparison. It was quite unfortunate, but the legacy platforms were in score band F, and our own observations combined with the qualitative feedback supported this rating. The silver lining was that there were plenty of areas in which we could improve.

For the MVP, I set a SUS target of 60, which would have been a 50% boost from the initial score of 40. This was a modest target, mainly because the platform is humongous and would take at least 3 releases over 2 years to revisit every single screen. I will compare this with the actual scores for the MVP once I get access to users.
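
For context on how these SUS numbers are derived, here is a minimal sketch of the standard SUS scoring calculation; the response set is illustrative, not real data from this study:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from the ten
    1-5 Likert responses, using the standard SUS scoring rules:
    odd (positively worded) items contribute (response - 1), even
    (negatively worded) items contribute (5 - response), and the
    total is multiplied by 2.5."""
    assert len(responses) == 10
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative (made-up) response set that lands near the legacy
# platforms' band-F score of ~40:
print(sus_score([3, 4, 2, 4, 3, 3, 2, 4, 4, 3]))  # -> 40.0
```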

Research


Reference for Sample size calculations

Define

Revised problem statement

We validated the problem statement while conducting research and were able to refine it further with more detail, e.g. additions such as improved IA, more usable flows and better copy. These issues don’t undermine the original problem; they act as additional areas for improvement.


There are quite a few people who interact with SHL products, and we broke them down into 3 categories.

  1. SHL internal users

    • These are internal teams that use the admin tools to provision products, assist clients with setting up projects, manage client accounts etc.

  2. Client users

    • These are users who use the admin tool to create hiring drives (projects), which includes administering tests to participants, evaluating responses, shortlisting, etc.

    • There are quite a few different levels of client users with different permissions; for example, a recruiter may administer tests but will not have access to participant responses.

  3. Participants

    • These are the individuals who actually take the assessments, which are hosted on an entirely different platform from the one covered in this case study.

Personas


We already had legacy personas, but they were out of date & not easy to comprehend for parties outside the design team, so we decided to create around 7 new personas. We split them among the whole team, and I chose to create the HR leader persona, shown on the right.

After conducting 4 semi-structured interviews with different HR leaders in-house and researching job descriptions of HR leaders across levels (Chief HR Officer, HR Director, etc.), I created a persona that was simple & easy to understand. The video below shows me doing some card sorting for the same.

Legacy Task flows


As a result of the research, I was able to put together a set of task flows showing how the old systems were being used. I’ve put 2 of the basic ones below as examples.

Design

Information Architecture

The first step was creating an improved information architecture for TC+, shown in the diagram below. We used typical UX methods like open-ended card sorting and had a multitude of discussions with technical teams, product managers and strategists. We managed to revamp a significant chunk of the IA in the first release; however, it is impossible to appreciate the difference without context, so let’s focus on bullet 2 in the revised problem statement shown on the right.


Site Map

After a couple of iterations, we managed to create a combined & improved sitemap, shown below.


User Flows

We picked individual items/features in our respective work-streams from the roadmap and worked with the product team to create user flows for each feature. This typically involved some light research.

We had to adjust our level of detail and flow format based on time constraints, the priority of the feature and the working style of the product manager in charge of it. Doing this exercise with the product managers enabled us to solidify requirements and surface any missing ones, but the main goal was to establish better cross-functional relationships.

Shown below are some flows I made as part of the features I was working on, capturing a user creating a new project. I’ve contrasted the legacy flow and the new flow in the same diagram.


Lo-Fi Wireframes + MoSCoW

I also curated a MoSCoW list to efficiently prioritise items based on a refined set of requirements. The MoSCoW diagrams shown below are just a brief summary for demonstration purposes; the actual list was written in Jira and was exhaustive and far more technical, to say the least.

I created wireframes in Figma using the user flows as a reference. The screens shown below are part of the happy path of creating a project (see the user flow above). I created such wireframe flows for every single path to surface any edge states.

Note: the wireframes below are NOT the initial wireframes; they had already been iterated upon and are the penultimate versions. Unfortunately, I couldn’t find the original lo-fi wireframes I made last year.


Iterations

We went through quite a few iterations as tech and product reviews progressed. Each time we reached a stable point, I would write copy for the screen and work with the copywriters to ensure consistency across our products. The flow below shows some changes we had to make due to tech constraints and some new requirements from product. The changes are marked in orange.


Taxonomy

We worked with the content strategist to curate and fix any and all taxonomy issues that arose at various points in the design process, and we found quite a few.
Parts of the taxonomy are shown in the diagram to the right, contrasting the same entity across different platforms. We updated the names to ensure each one reflected its true meaning and that the hierarchy was maintained.

We also considered interpretability from a user’s perspective and made changes such as renaming ‘project groups’ to ‘participant groups’, since most users are familiar with participants, whereas only expert users are familiar with projects.


Master Components / Mid-Fi wireframing

I worked with the UI team to create an exhaustive set of master components based on all the different wireframes I had developed. The colour schemes and graphical elements were based on one of the legacy platforms to ease migration. The diagram below shows the wireframes with the stylisations applied and sample data filled in, effectively transitioning the wireframes to mid-fidelity designs. Note: these are not the final designs.


Common sub-features across workstreams

As all the designers were working in their respective work-streams, I noticed some common sub-features that were going to be part of features in other work-streams, so I decided to take ownership of these sub-features across work-streams to standardise their behaviour.


These common features needed to function in tandem, as they would eventually contribute to the formation of a user’s mental model. I worked with the content strategist to set up a copy style and with the UI designers to build universal patterns, while ensuring the designs met accessibility and localisation standards. Once I had been through every possible use case and eventuality, I shared the components with the rest of the UX team so that they could incorporate them into their respective work-streams.

Prototyping

I then built a dozen or more prototypes based on high-fidelity wireframes to demonstrate the designs to senior stakeholders and to use for usability testing (see the Usability Testing section). 3 prototypes are shown below.


This is the primary flow discussed in the earlier sections.

Creating a project

This is part of the common sub-features mentioned in the section above.

Bulk Upload

This is part of the common sub-features mentioned in the section above.

Filters

Improvements in Experience

The first improvement was the overall site map, shown in the diagram below. We designed a much more intuitive site map based on a more scalable and improved IA, as demonstrated in earlier sections of this portfolio.


Another major improvement was the project creation journey. This process was broken down into bite-sized steps to aid intuition and mimicked users’ existing mental models, such as the ‘product tray’ being analogous to a ‘shopping cart’ on popular e-commerce sites. This example is shown below. We made similar upgrades throughout the entire platform, greatly reducing cognitive load and improving the overall experience.

Usability Testing

I then conducted moderated, unmoderated and A/B usability tests, both in person and on platforms such as UserzoomGO.


We conducted moderated testing sessions, or rather cognitive walk-throughs, with a limited set of expert users that included some of the individuals we had conducted our initial research interviews with, to validate the flows and ensure we were addressing all critical issues. We wrote a test script and conducted 6 x 1hr sessions primarily based on the project creation workflow, as that was significantly different from either of the 2 legacy platforms. We had a myriad of findings; however, these were primarily technical in nature and pertained to the feasibility of the features we implemented.

The results from this were positive: 6/6 testers preferred our new flow, and the results served as a good proof of concept, giving us actionable information about which features would work and which wouldn’t.

Moderated Testing

I divided my features into bite-sized chunks to be tested with 6 testers per round over 4 rounds, i.e. 24 individual tests. I conducted multiple sets of such tests over the course of the year for different work-streams and platforms as well.

I targeted a completion time of 5-10 minutes to avoid contributor fatigue. I picked the 9 most challenging tasks for the 1st round and created prompts, questions, hypotheses and prototypes for each task, documenting them all in a testing plan. I also created a preamble & scenarios to provide context to the tester. I then set up an unmoderated test in UserzoomGO and chose testers who were the closest demographic match to my target persona.

Unmoderated Testing

Analysing the results was a lot of fun and allowed me to utilise my engineering/data analytics training. I analysed all the quantitative results in Excel and plotted key charts. I would’ve used MATLAB; however, that would have been inefficient for such a small and simple dataset.

The testing times for each tester in round 1 are shown below against the 9 tasks. I set an acceptance criterion of >= 5/6 testers completing each task, which meant the main areas of concern were the first 2 tasks, where the scores were 3/6 and 4/6 respectively.
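
As a rough sketch of how the pass/fail check works against that criterion (the per-task figures other than tasks 1 and 2 are placeholders, not the real data):

```python
# Completions out of 6 testers per task in round 1. Tasks 1 and 2 use
# the 3/6 and 4/6 figures quoted above; the remaining values are
# placeholders for illustration only.
completions = {1: 3, 2: 4, 3: 6, 4: 5, 5: 6, 6: 5, 7: 6, 8: 6, 9: 5}
TESTERS = 6
THRESHOLD = 5 / 6  # acceptance criterion: at least 5 of 6 testers succeed

for task, passed in sorted(completions.items()):
    rate = passed / TESTERS
    verdict = "pass" if rate >= THRESHOLD else "needs rework"
    print(f"Task {task}: {passed}/{TESTERS} ({rate:.0%}) -> {verdict}")
```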

One of the questions I asked the testers after each task was whether they thought they had completed it successfully. I did this to check how well the testers understood the prompts: if there is a significant delta between tasks actually passed and tasks the users thought they passed, then there is likely a problem with the prompt. This is shown below for task 1.

Let’s examine task 1 in more detail.

The goal was to have them click on the ‘Company settings’ menu item in the header to find company users, but half of the testers ended up clicking on ‘People’ in the left-hand navigation menu, which leads to a list of individual participants. This was a taxonomy error: participants and company users are both people, but the left-nav item ‘People’ only leads to participants. This was solved by changing the left-hand nav item from ‘People’ to ‘Participants’. I ran another test to validate this.

As for the disparity in how task completion was interpreted, the testers assumed they had completed the task successfully because they thought they only needed to talk through what they would do rather than actually attempt it on screen. I solved this by updating the prompt and retested this as well. The prompts are shown in the slide below.

I made similar changes in task 2 and pushed out another test. I replaced tasks 3-9 with different tasks as they didn’t require further validation.

See the diagram below: red = failed, green = passed.

The changes I made resulted in a perfect success rate for each task. I then presented my findings to the larger team and convinced them to implement my proposed changes across all work-streams.

Key observation: I realised that the 3-point post-task question I used to measure difficulty was too coarse, as most people rated all the tasks as moderately difficult.

Due to the lack of in-house UI resource, leadership decided to purchase a set of master components from a UI agency that would cover not only this admin platform but every single product SHL has. As this was a major UI change, I, along with the other UX designers, was constantly involved in reviews to ensure that all the patterns we had established over the year were maintained and that none of the UI changes affected the UX.

I used this opportunity to conduct prelaunch* unmoderated A/B tests on some key platform-wide patterns and behaviours in the new style. I ran a test with 60 testers, i.e. 30 per variant. As a minimum baseline, I wanted a confidence level of at least 90% with a 15% margin of error for binary metrics and 20% for continuous ones.
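
As a quick sanity check on those baselines, here is a minimal sketch using the normal approximation; it is my reconstruction rather than the exact calculation from the study, and the coefficient of variation for the continuous case is an assumption:

```python
import math

Z_90 = 1.645  # two-sided z-value for a 90% confidence level
n = 30        # testers per variant

# Binary metric (e.g. task completion rate), worst case p = 0.5:
p = 0.5
moe_binary = Z_90 * math.sqrt(p * (1 - p) / n)
print(f"Binary metric margin of error: +/-{moe_binary:.1%}")  # ~15.0%

# Continuous metric (e.g. task time): the margin depends on the spread,
# so the 20% baseline only holds under an assumed coefficient of
# variation (std dev / mean); 0.65 here is purely illustrative.
cv = 0.65
moe_continuous = Z_90 * cv / math.sqrt(n)
print(f"Continuous metric margin of error: +/-{moe_continuous:.1%} of the mean")  # ~19.5%
```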

I wrote down a set of hypotheses, created tasks and prototypes, and selected key performance indicators (KPIs) based on the specific tasks. I corrected the post-task question issue from the unmoderated testing earlier in the year by using the Single Ease Question (SEQ) instead, which gave more sensitivity as it is based on a 7-point scale. I have elaborated on task 5/9 below.

A/B Testing

I created an Excel sheet to document and analyse the results of the test and calculate all the relevant metrics. For this task, I created a performance metric P to contrast variant A and variant B.
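
The actual definition of P isn’t reproduced in this case study; purely as a hypothetical illustration of the idea, a composite metric contrasting two variants could weight completion rate against median task time along these lines (the weights, reference time and data below are all made up):

```python
from statistics import median

def performance_metric(completed, times_sec, w_rate=0.7, w_speed=0.3, t_ref=60.0):
    """Hypothetical composite performance metric P in [0, 1]: a weighted
    blend of completion rate and a speed score, where the speed score
    compares the median task time against a reference time t_ref (s).
    This is NOT the actual metric used in the study."""
    rate = sum(completed) / len(completed)
    speed = min(1.0, t_ref / median(times_sec))
    return w_rate * rate + w_speed * speed

# Illustrative (made-up) data for variants A and B of a single task:
p_a = performance_metric([1, 1, 0, 1, 1, 0], [70, 85, 120, 64, 90, 110])
p_b = performance_metric([1, 1, 1, 1, 0, 1], [48, 55, 62, 51, 95, 58])
print(f"P(A) = {p_a:.2f}, P(B) = {p_b:.2f}")
```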

Based on the finding above, we ensured that all forms longer than a particular length would always be displayed in a single-column layout. We repeated such audits and exercises wherever possible to incorporate as many changes as we could in light of the findings.

Once we had a consensus from all members of the design team and product owners, we proceeded to present the findings and mid-fi wireframes to senior management.

Delivery

High Fidelity Wireframes

Once we received final approval from senior management (we had to tweak a few things here and there), I worked with UI to create high-fidelity designs and documented/uploaded links for each screen on Confluence for developers’ ease of access. I supported any ambiguous screens with annotations and prototypes as well. Shown below is a screenshot of the deliverable for a feature I created in Q4 2022.


QA support

After the initial grooming sessions, I took part in demos to answer any queries and help product owners prioritise between features/considerations.

We thoroughly went through release 1, which was an internal release 10 months into the project. We held quite a few sessions with QA to train them on what to look for and how to prioritise design elements, and we also held training sessions on Figma. My role in this was to red-line items via the QA environment and create training material.


Launch

Migration Plan

The migration was planned as shown in the journeys below. There were 2 main phases: the explore phase and the should phase. This was designed to form an incremental release plan, as different clients had different levels of inertia, and the aim was to gradually transition and train them as opposed to immediately upgrading the system and forcing them to learn it. Note: this phase was led by the strategy director; my role involved turning the business plan into actionable steps from a UX perspective and, of course, creating screens.


I worked with the UI and marketing teams to create screens and widgets to introduce the upgrade in the existing platform. Some of these are shown below.

Benchmarking

I had put in a request to conduct SUS surveys with our users, as I wanted to establish long-term usability tracking practices in the firm so that we could compare the general changes once Talent Central+ had been adopted; however, there was quite a bit of red tape around client access. This was one of the reasons I was advocating for SUS: it is inexpensive to administer and would be a good starting point upon which we could build in the future. I will update the graph below once I am able to gain access to users.


Next steps & Retrospective

There was a massive backlog we had to pick up, as development and migration on this platform would need another 12-18 months to be anywhere near complete; however, the MVP was successfully launched and adopted. The backlog items were typically more back-end in nature, i.e. the screens wouldn’t be seen by client users but would be used by our internal teams to provision products, manage accounts, etc.

I learnt so much from this project. Suffice it to say, it was a massive project that lasted over a year, and I had a lot of fun working cross-functionally with so many different divisions of the company. I learnt a great deal about the importance of testing and documenting processes, which may seem trivial in a small project but is crucial when working in a larger team where projects are constantly handed over between designers. Communication also has to be extremely clear so that teams outside the product division can understand our deliverables. To this end, I created a template for annotations that was widely adopted and very well received by external stakeholders.

I would like to thank SHL once again for recognising my contribution to this project.


Thank you for reading my case study :)

Please contact me with any enquiries.