78% of experienced programmers can't solve this coding problem that's simpler than FizzBuzz

TLDR: Most coding tests are needlessly difficult. Our analysis of results from tens of thousands of coding tests shows that more difficult questions are less effective at screening candidates for technical roles.


Anyone remotely involved in recruiting has their own horror story about candidates with perfect resumes being invited to interviews, only for the hiring manager to discover that they can't code. The spray-and-pray application process deployed by candidates means each corporate job opening attracts 250 resumes on average. This forces companies to use an automated screening process to avoid spending countless hours interviewing candidates.

A FizzBuzz-style screening is critical to prevent hiring managers from wasting their time interviewing candidates who can't code.

The problem with coding tests today

Traditional assessment tools use coding puzzles and niche algorithm questions for screening, which creates a lot of angst among programmers who have to sit through irrelevant assessments.


While it's great if someone is good at puzzles, that is not a strong indicator of how good an engineer they are or how well they will perform in the role (for most roles). This way of evaluating developer skills also has an inherent bias against more experienced developers and developers from non-CS backgrounds.

What we're trying to do differently

The main reason we started Adaface is that traditional tech assessment platforms are not a fair way for companies to evaluate engineers. Our goal is to help companies find great engineers by assessing the on-the-job skills required for a role.

We deploy subject-matter experts who go over a customer's job description to understand the ideal candidate persona and select the most relevant questions from our database for the assessment.

Over the last few years, several enterprises globally have switched to Adaface, and we've had a chance to take a closer look at whether the approach works. We intend to share our learnings with the community, both to uncover better insights and to fast-track our progress towards the goal of building candidate-friendly assessments.

One of the experiments we ran at Adaface was to test whether it makes sense to include easier or more difficult coding questions in a coding test (a 45-minute assessment has room for only one coding question, so a mix of both is not an option, and we're not keen on 120-minute assessments).

One of our favorite questions to include in coding tests is a very straightforward question that requires the candidate to use just an array and a dictionary/map. When our team created this question, it was categorized as "Easy" (as candidates attempt the question and more data points come in, the difficulty level is updated automatically). Now that a significant number of candidates have attempted this question, we can share some results. Here goes:

The setup

Question

Since this question is currently being used by some of our clients, we can't reveal the exact question. However, here's what one would need to do to solve the question:

  • Parse a string array with duplicate values
  • Create a dictionary to count the number of times each value appears
  • Return the key with the highest count

This should give you a clear picture of what the question looks like, and of its difficulty level.
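
To make the difficulty level concrete, here's a minimal sketch of one way a candidate could solve a question like this, written in Python purely for illustration. The function name, input format, and tie-breaking rule below are our assumptions; the actual question spells out its own input format and edge-case behavior.

    def most_frequent(values):
        # values: a list of strings that may contain duplicates.
        # Returns the value that appears most often, or None for an empty input.
        # Ties go to the value that appears first (an assumption for this sketch;
        # the real question defines its own tie-breaking rule).
        if not values:
            return None

        counts = {}
        for value in values:
            counts[value] = counts.get(value, 0) + 1

        best_key = None
        best_count = 0
        for value in values:  # iterate in input order so ties resolve to the first value seen
            if counts[value] > best_count:
                best_key = value
                best_count = counts[value]
        return best_key

    print(most_frequent(["a", "b", "a", "c", "a"]))  # -> "a"

A working solution is roughly a dozen lines in most languages, which is what you'd expect from a question in FizzBuzz territory.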

Coding environment

  • The question is explained in straightforward language that can be understood by a non-native English speaker.
  • The question also includes 2 examples with expected outputs for additional clarity.
  • The setup includes error-free boilerplate code that parses the input and prints a sample output to the screen (a rough sketch of what this could look like follows after this list).
  • The candidate can choose to solve it in any programming language.
  • The code editor has documentation embedded for all programming languages, so they can look up any function, along with code examples.
  • All edge cases (what if the input is null? what if there is a tie?) are clearly explained.
  • The question is non-googleable: if you google it, you won't find the solution online.
  • The candidate has 25 minutes to solve the problem. When they have 5 minutes left, they can add a buffer time of 5 minutes to complete the problem, for a small score penalty.
  • If the candidate is stuck, they can take a hint for a small score penalty. A sample hint would be something like "Your current code won't work for a test case where the input array contains only 1 element."
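
For a sense of what candidates actually see, here is a rough sketch of what such boilerplate could look like if a candidate picked Python. The single comma-separated line on stdin and the solve() stub are assumptions made for this illustration, not the actual harness used in the test.

    import sys

    def solve(values):
        # TODO: return the value that appears most often in `values`.
        return ""

    def main():
        # Boilerplate: read one comma-separated line from stdin,
        # call solve(), and print the result.
        line = sys.stdin.readline().strip()
        values = line.split(",") if line else []
        print(solve(values))

    if __name__ == "__main__":
        main()

The candidate only has to fill in solve(); the parsing and printing are already done for them.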

Candidate pool

These candidates are not truck drivers or plumbers with no programming experience who applied for the role without reading the job description. These are pre-screened engineers with 2+ years of experience who have applied for roles at companies like Morgan Stanley, Amazon, PayPal, the Singapore government, and the like.

Candidate score distribution

  • 59% scored 0
  • 19% scored between 0 and 50% of the maximum score
  • Only 13% solved it with a perfect score

Why asking difficult questions on a coding test does not make sense

  1. Programming under pressure is difficult
  2. Programming in an unfamiliar environment is difficult
  3. Most candidates won't be able to solve even the easy question, so it already serves as an effective filter
  4. A lot of engineering managers think hard questions == niche algorithms/CS puzzles
  5. It is difficult to come up with hard questions yourself, so most hiring managers use questions from the internet, which are googleable and make the test irrelevant

FAQ

But this dummy question is not truly reflective of on-the-job skills and doesn't prove whether a candidate can do the job?

You're right, it doesn't.

The idea behind the coding test is just to filter out candidates who are definitely unqualified, which this question does a great job of. Ideally you want to assess their ability to write clean, maintainable code with a simple design, work collaboratively with the team to arrive at solutions to difficult problems, etc. But that's not the coding test's job.

Before setting up a 2-hour interview with a candidate, you want to know whether the candidate can write code at all. That is the job of a coding test.

I think a take-home assignment is better at evaluating candidate skills.

Again, you're right.

A take-home assignment can do a much better job of capturing what the role will be like, so the evaluation will be better. However, most companies that use take-home assignments report a completion rate of 30% (because the assignments are too long).

If you're seeing better completion rates, you should probably stick with a take-home assignment. For everyone else: candidates are busy, and if you're adamant about using long assignments as the first step of your hiring process, you might lose out on good candidates.

What does this mean?

An easy coding question is a great way to filter out unqualified candidates who can't code. Nothing difficult: just a simple coding exercise to go through the motions of writing code. That way you can keep the coding test short (~45 to 60 minutes) and avoid driving away your candidates.

Looking to screen candidates for your tech roles? Check out Adaface.