Assessment Centre Scoring

Published by Scott Jenkins

Assessment Scoring

Recently, I formed part of the Dunelm assessment team at our Digital Apprenticeship Assessment Day. Our aim was to recruit the ‘best’ candidates into the role, which will see the 4 successful students studying for degrees in retail alongside working rotations across marketing and trading. In this post, I’ll write about my experience of the day, and share some thoughts on fair assessment: How to establish and define what ‘best’ means in a recruiting context, and how to measure candidates against this definition.

Digital Marketing Sale Activity

On the day of the assessment centre, I was responsible for leading the Digital Marketing Activity. I had been planning and preparing the task over the previous few weeks, so I was excited to deliver the session and see how the candidates reacted to it. The candidates, split into three groups of five, rotated around the activities (a trading task and interviews) before merging for a final task in the afternoon. I was impressed with the composure and maturity demonstrated by some of the group, all of whom were in the final year of their A-Levels.

My 60 minute session can be outlined as follows:

  • First 15 minutes:
    • Brief introductions of myself and of digital marketing.
    • Top level overview of the aims and approaches of SEO, PPC, CRM and Affiliates.
    • Digital Marketing task briefed: using resources provided to plan for a summer sale.
  • Next 30 minutes:
    • Group working on task.
    • Interruptions / news flashes of information, aimed at testing adaptation to change.
  • Final 15 minutes:
    • Group presents ideas and strategies back to the assessment team.
    • Discussion of activity challenges/approach.
Digital marketing sale strategy
Planning a summer sale

It was pleasing that the activity was well received by both the candidates and the resourcing team. It was designed to stretch even the most competent of groups through the variety of tasks presented, and the additional information dropped in at intervals is a good test of how a group responds to changing circumstances. It is also a good test of teamwork, delegation and presenting. There is also the opportunity to run this activity as an introduction to digital marketing with other teams in the business, at an away day or team-building workshop.

For the remainder of this post, I’m going to discuss ideas to ensure fair assessment, all centred around a numerical method. To add clarity, I’ll work through an example taken from a digital marketing context, supposing that I was recruiting for an SEO role.

Setting up a Fair Assessment

Before candidates start the process (and before the job advert is even posted), an assessment team must be clear about what role is being recruited for, the skills required to be effective in that role, and the relative importance of these skills to each other. Which traits are essential? Which are nice to have?

Beginning with a list of skills or attributes that would be desirable in suitable candidates, we score each on its importance, in this case as an integer between 1 and 5 inclusive. The table below demonstrates this for our SEO example. Higher scores signify greater importance for the role.

SEO Attributes

In this example, we see from the table that I’m after someone who can learn quickly and adapt to change in the uncertainty of an SEO environment. I’m also putting emphasis on a logical approach, which is helpful for identifying opportunities from organic data. In this example role, communication skills, understanding of digital marketing and interest in web design are less important. This table can of course be adapted for any role.
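To make the later calculations easier to follow, here is a minimal sketch of these importance weights as a Python dictionary. The pairing of skill names to values is an assumption for illustration only (the actual table is shown in the image above); the values themselves are chosen so that the worked Interview example later in the post adds up.

```python
# Illustrative skill-importance weights (1-5) for the SEO example.
# The pairing of skill to value is an assumption for this sketch;
# the values match those used in the worked Interview calculation below.
skill_importance = {
    "Learns quickly": 4,
    "Interest in web design": 1,
    "Adaptable to change": 5,
    "Logical / analytical approach": 4,
    "Strong written and verbal communication": 3,
    "Strong relationship builder": 2,
    "Understanding of digital marketing": 1,
}
```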

Towards Task

Now that we have identified the skills and competencies we value for the role we are recruiting for, we need to plan and prepare our methods of assessment. Typical assessment day activities include an interview, a presentation, a task related to the role or a group activity. When preparing the tasks, we need to ensure that, collectively, they will test candidates on the skills we have listed in our competency table above. If our assessment is not aligned to these attributes, a ‘successful’ candidate in this context will not necessarily be a good fit for the role.

Suppose we decide that we will run our SEO assessment day to include the following tasks:

  1. Interview: the staple of most recruitment processes.
  2. Blog Content: Analysing search volume data together with existing blog performance to propose and draft a short piece of blog content.
  3. Sale Planning: Given data from across other digital marketing channels, produce an 8-week SEO strategy as Dunelm approaches its summer sale.
  4. Individual Presentation: Give each candidate a subject on the day, on topics as diverse as the Spanish Civil War, space elevators, synchronised swimming and sweet making. Allow 30 minutes for research, then give 5 minutes to present on their topic.
  5. Competitor Analysis: Research our competitors and write up observations around given questions.

For each of the traits in our SEO skills table, we need to understand how well each of the above tasks will evaluate performance in that skill. Again, we will use a numerical scale, this time from 0 to 5, with 5 indicating that a task is an excellent test of a given skill and 0 that it does not test the skill at all. Putting this together, I have reached the following grid.

Task Scoring

Sense-checking here, we see that the ‘Individual Presentation’ task has scored 5 for ‘Learn Quickly’ and 4 for ‘Strong written and verbal communication’. This makes sense: reading the description above, the task requires candidates to learn quickly about an unfamiliar topic and then present back their findings. Notice also that the ‘Interview’ is the only task in our example which tests a candidate’s ability to be a ‘Strong relationship builder’.

With our scoring in place, we can weight each task as follows: for each task, take the sum of the products of each skill’s importance and how well the task tests that skill. As a mathematical aside, considering the two tables above as matrices, we’re taking the matrix product with the task table on the left of the skill table.

The Interview task score for example:

= (3*4) + (3*1) + (3*5) + (0*4) + (5*3) + (4*2) + (3*1)

= 12 + 3 + 15 + 0 + 15 + 8 + 3

= 56         
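In code, the whole weighting step is a single matrix-vector product. In the sketch below, only the importance weights and the Interview row are taken from the worked example above; the other rows of the grid are assumptions, chosen so that the totals come out at the values used later in the post.

```python
import numpy as np

# Skill importance (1-5), in a fixed skill order matching the earlier sketch.
importance = np.array([4, 1, 5, 4, 3, 2, 1])

# How well each task tests each skill (0 = the task does not test it at all).
# Only the Interview row comes from the worked example; the rest are illustrative.
task_grid = np.array([
    [3, 3, 3, 0, 5, 4, 3],  # Interview
    [1, 1, 2, 2, 2, 0, 0],  # Blog Content
    [2, 2, 4, 3, 2, 0, 1],  # Sale Planning
    [5, 1, 0, 0, 4, 0, 0],  # Individual Presentation
    [0, 1, 1, 1, 0, 0, 1],  # Competitor Analysis
])

# Task weight = sum over skills of (task score for skill * skill importance),
# i.e. the task grid (5x7) multiplied by the importance vector (7,).
task_weights = task_grid @ importance
print(task_weights)  # [56 29 49 33 11] - Interview comes out at 56, as above
```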

Pulling this together, we reach the following scores for each task. This is a measure of how much performance in each task will influence a candidate’s success.

Task Total Scores
Task Scores

From the table, we see that the Interview and Sale Planning tasks carry the most weight, whilst strong performance in the Competitor Analysis task is the weakest indicator of success in the SEO role we are recruiting for.

Assessment Scoring

The final piece of this assessment scoring method occurs during the assessment day itself. For each task, candidates are assigned a score for their performance. Again, let’s use a scale of 1 to 5, 5 representing the most competent. Let’s tabulate these values.

Candidate Assessment Scores
Candidate Scores

Who would you hire based on this information?

Reuben has scored highly in 3 of the activities and has been given 5s in both the Blog and Analysis tasks. Naomi has been consistent but failed to score 5 in any task. Luke had a great interview but scored poorly in all other activities. Adding up the scores for each candidate, we see that in this scoring framework, Naomi scores highest, followed by Reuben and Zara, with Luke trailing several points back.

Table of summed scores
Simple Summing Scores
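As a sketch of this naive approach in code: the scores below are illustrative (only Luke’s row is taken from the weighted calculation later in the post), but they are arranged to reproduce the ordering described above.

```python
# Illustrative candidate scores per task (1-5), in task order:
# Interview, Blog Content, Sale Planning, Presentation, Competitor Analysis.
# Only Luke's row comes from the worked example below; the rest are assumptions.
candidate_scores = {
    "Luke":   [5, 1, 2, 1, 2],
    "Naomi":  [4, 4, 4, 4, 4],
    "Reuben": [1, 5, 1, 4, 5],
    "Zara":   [2, 4, 2, 3, 3],
}

# Simple summing treats every task as equally important.
simple_totals = {name: sum(scores) for name, scores in candidate_scores.items()}
print(sorted(simple_totals.items(), key=lambda kv: -kv[1]))
# [('Naomi', 20), ('Reuben', 16), ('Zara', 14), ('Luke', 11)]
```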

Instead, let’s see how the candidates fare when we take our previously calculated task scoring into account. We score each candidate using the same method as when calculating task scores: taking the sum of the products of each task’s weighting and the candidate’s score in that task. From a mathematical viewpoint, we’re using the matrix product once more.

For example, calculating Luke’s score:

= (5*56) + (1*29) + (2*49) + (1*33) + (2*11)

= 280 + 29 + 98 + 33 + 22

= 462

Pulling this together for all candidates, we reach the following scores.

Table showing improved weighted sum scores for the assessment
Alternatively Weighted Scores

We see that Naomi is still ahead in this scoring system, but notice that the rankings of Luke and Reuben have been reversed, with Luke now in second place. With the simple summing approach above we would have written Luke off, but in this weighted example Luke is a strong second choice behind Naomi.
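Here is a minimal numpy sketch that reproduces this behaviour. As before, only the task weights and Luke’s scores are taken from the worked example; the other candidates’ scores are illustrative assumptions.

```python
import numpy as np

task_weights = np.array([56, 29, 49, 33, 11])  # from the task-weighting step

# Candidate-by-task scores in the same task order; only Luke's row is from the post.
candidates = ["Luke", "Naomi", "Reuben", "Zara"]
scores = np.array([
    [5, 1, 2, 1, 2],   # Luke
    [4, 4, 4, 4, 4],   # Naomi
    [1, 5, 1, 4, 5],   # Reuben
    [2, 4, 2, 3, 3],   # Zara
])

# Weighted total = score matrix (4x5) multiplied by the task-weight vector (5,).
weighted_totals = scores @ task_weights
for name, total in sorted(zip(candidates, weighted_totals), key=lambda nt: -nt[1]):
    print(name, total)
# Naomi 712, Luke 462, Zara 458, Reuben 437 - Naomi still first, Luke now second
```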

The benefit of the weighted method is that each score signifies performance more accurately aligned to our recruiting aims: we have produced a fairer and more accurate measure to guide us to the best candidates. I suggest that this framework is only a guide, however. The calculations are helpful when set up properly, but it is often worth further discussion to check that the numbers match overall assessor impressions, especially when several candidates have very similar final scores.

Other Considerations

In something as complex as recruiting, it is rarely a good idea to worship an algorithm wholeheartedly. Data and numbers help inform decisions, but we should first consider both their validity and any additional information or factors not captured by the model. A couple of considerations in our SEO example could include:

  • Candidate demeanour and fit with culture / company values. None of our tasks explicitly assesses this, yet agreeableness is important for all but the most independent and detached roles.
  • Differing candidate schedules. The order in which candidates sat the tasks may affect their nerves and preparation. Candidates may struggle going straight into an interview at the beginning of the day and may have performed with more confidence if they had completed a group activity first. For related tasks, candidates may benefit from knowledge gained earlier in the day. How can assessors take this into account?
The maze of recruitment
Recruiting is not a simple process

Another challenge arises from the subjectivity of different assessors and interviewers. We can calibrate scores across tasks, using unusually high or low average scores to prompt discussion of overly harsh or generous marking.

Interviews conducted by pairs of assessors may be considered fairer than those with a single interviewer, and once again we can calibrate scores if we suspect that some interviewers have tougher marking criteria than others. Suppose, in the example above, that Interviewer A interviewed Luke and Naomi (scoring 5 and 4 respectively) while Interviewer B interviewed Zara and Reuben (scoring 2 and 1 respectively). We could question whether the candidates really interviewed that differently, or whether Interviewer A was simply more generous with their marks. Prior to the assessment day, we could compare the scores different interviewers give to the same pre-recorded, acted interview footage and adjust accordingly.
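One simple, assumed approach to that calibration (a sketch of the idea, not something prescribed above) is to shift each interviewer’s scores so that every interviewer ends up with the same average, damping out generous or harsh marking:

```python
# Hypothetical calibration sketch: centre each interviewer's scores on the
# overall mean so that 'generous' and 'harsh' interviewers become comparable.
interview_scores = {
    "Interviewer A": {"Luke": 5, "Naomi": 4},
    "Interviewer B": {"Zara": 2, "Reuben": 1},
}

all_scores = [s for panel in interview_scores.values() for s in panel.values()]
overall_mean = sum(all_scores) / len(all_scores)

calibrated = {}
for interviewer, panel in interview_scores.items():
    offset = overall_mean - sum(panel.values()) / len(panel)
    for candidate, score in panel.items():
        calibrated[candidate] = score + offset

print(calibrated)  # {'Luke': 3.5, 'Naomi': 2.5, 'Zara': 3.5, 'Reuben': 2.5}
```

With only two candidates per interviewer this over-corrects, of course; in practice you would want to calibrate on more data, such as the pre-recorded footage mentioned above.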

A final point of consideration, which I was careful to abide by, was not to talk to other assessors about candidates who were yet to take part in my activity. This ‘blindness’ of assessors helps alleviate confirmation bias. If we had each been fed information about a candidate’s performance in earlier tasks, the telephone interview, the CV and so on, then this could have prejudiced our scoring.

Wrap Up

A big cheer to the resourcing team for hosting such a well-organised assessment day, and my thanks to them for giving me the opportunity to plan and lead the digital marketing activity: I learnt a lot, and being on the other side of the recruitment process was an enjoyable experience. I look forward to contributing to different assessment efforts in the future.

Until next time,

Scott

Categories: Recruitment