How does your manager evaluate technical expertise? (part 2)


Reader,

In Part 1, we talked about Jake, one of the best recruiters I've ever worked with as a candidate. He was great, but ultimately had a "miss" in his hiring practice which cost his team time.

Jake spent hours sourcing and interviewing candidates for this role, only to see every finalist rejected because they failed the assessment.

At first, Jake thought he was just sourcing the wrong candidates.

When we took time to compare the assessment to the actual job requirements, the misalignment was obvious.

Now he's back at square one, working with the hiring manager to fix the assessment, and he's already lost strong candidates who likely would've succeeded in the role.

How did I know other candidates had failed this assessment?

Jake told me about it.

There was a candidate for a Staff QA Engineer position -- I was going for Senior QA -- who, in retrospect, seemed to encounter the same problems I did with the tech assessment.

Jake's story about this other candidate began to sound like my story...

I was another data point for Jake.
A competent QA engineer who got tripped up by a misaligned LeetCode assessment.

I spent extra time walking him through the assessment so he could see where the process missed the mark. I’m sure he’s wishing he’d reviewed this earlier with prior candidates.

If you're the technical recruiter and you're getting green flags on your candidate everywhere EXCEPT the technical assessment, you may want to evaluate the assessment itself.

Is your tech assessment serving you?

The req in question listed these skills as the "dealbreakers". Nowhere in the req did it mention a need to write complex algorithms (this is important, as we'll soon see).

To obscure the name of the company, I fed the req into Gemini and asked it to condense the most essential skills into a brief bulleted list:

Their Assessment

I haven't really told you much about their tech assessment yet, so let's dive into that first.

Part 1 - "...but can they code?"

The LeetCode-style problem they gave me was "find the next largest palindrome".

A quick AI query will tell you this was not an easy problem.

This was a poorly designed test for coding ability, mostly because it was a Hard-level LeetCode problem.
If you want to know if someone can write code, picking an easier problem has zero downsides.

I didn't get anywhere close to solving it -- all I managed to do was create an "isPalindrome" function.
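
For reference, my contribution amounted to something like this (a from-memory TypeScript sketch, not my exact submission):

    // Roughly what I managed to write (reconstructed from memory, not my actual submission).
    // Checks whether a string reads the same forwards and backwards.
    function isPalindrome(value: string): boolean {
      let left = 0;
      let right = value.length - 1;
      while (left < right) {
        if (value[left] !== value[right]) {
          return false;
        }
        left++;
        right--;
      }
      return true;
    }

    console.log(isPalindrome("1221")); // true
    console.log(isPalindrome("1231")); // false

Useful, sure, but nowhere near a solution to the actual problem.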

Not only are QA folks not used to writing convoluted algorithms, but writing this kind of code isn't part of their job in the first place.

They still gave me 4 out of 4 points because I confirmed I could write functional TypeScript code, though.

Meh.

Part 2 - the test results aggregation/summary algorithm

After that trainwreck, they gave me a problem that I admit was MUCH more relevant to QA Automation work.

The only problem is that this role didn't need someone to create their own reporter from scratch.

Since the team uses Playwright and doesn't need custom reporters, the person they hired would never need to modify reporter code.

So...why have the candidate prove that they can take test results JSON and manipulate it into a report on how many tests were flaky?
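
For context, the exercise boiled down to something like this (the real input schema was theirs; the shape and field names below are my rough approximation):

    // Rough approximation of the exercise -- the actual JSON schema was theirs;
    // this shape and these field names are my guesses.
    interface TestResult {
      title: string;
      status: "passed" | "failed" | "flaky" | "skipped";
    }

    interface RunReport {
      results: TestResult[];
    }

    // Count how many tests in a run were flagged as flaky.
    function summarizeFlakiness(report: RunReport): { total: number; flaky: number } {
      const flaky = report.results.filter((r) => r.status === "flaky").length;
      return { total: report.results.length, flaky };
    }

    const sampleRun: RunReport = {
      results: [
        { title: "login works", status: "passed" },
        { title: "checkout retries then passes", status: "flaky" },
        { title: "profile update", status: "failed" },
      ],
    };

    console.log(summarizeFlakiness(sampleRun)); // { total: 3, flaky: 1 }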

Seems like a waste of everyone's time. There are much better ways to assess QA Automation skills.

Like what? Glad you asked 😁

A better assessment for this position would've been:

  • Give the candidate a bug report and ask them to write a Playwright test that would have caught that bug. Have them talk through their solution before writing any code, and then explain the code they wrote. (There's a sketch of what this could look like right after this list.)
  • Give the candidate a feature and have them talk through what they would test at the E2E, API/integration, and unit layers. Ask them to explain why. Perhaps have them write an E2E or API test or both, if you can provide access to a dummy API or demo app.
  • Have a QA Engineer on the call to help assess code quality, and ask the candidate to write an Appium script for a mobile app feature. Provide the necessary XML for the candidate to make correct locators for the UI elements. Ask them to explain why they chose certain locator strategies or assertions, and any other scripting choices they made.
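
To make the first option concrete, here's the kind of test a candidate might end up writing (the bug report, the app, and the locators are all hypothetical):

    import { test, expect } from '@playwright/test';

    // Hypothetical bug report: "Expired discount codes still apply a discount at checkout."
    // The URL, labels, and test IDs below are made up for illustration.
    test('expired discount code is rejected at checkout', async ({ page }) => {
      await page.goto('https://example.com/checkout');

      // Apply a discount code we know is expired
      await page.getByLabel('Discount code').fill('SUMMER2023');
      await page.getByRole('button', { name: 'Apply' }).click();

      // These assertions would have caught the bug: the total used to drop
      // even though the code had expired.
      await expect(page.getByText('This code has expired')).toBeVisible();
      await expect(page.getByTestId('order-total')).toHaveText('$50.00');
    });

The conversation around the test (why those locators, why those assertions, what else they'd cover) tells you as much as the code itself.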

Regarding that last option (the Appium one), I've never seen it done in the real world because running automated tests against mobile devices requires a lot of technical setup that you can't reasonably ask candidates to do.

That said, with the right expertise in the room, you don't need to run the test code in order to get good signal on whether or not the candidate knows how to use Appium.
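
Here's a rough sketch of what that could look like on paper, using the WebdriverIO flavor of Appium bindings (the app, capabilities, and locators are all made up):

    import { remote } from 'webdriverio';

    // Hypothetical mobile login flow. None of these IDs come from a real app;
    // they're stand-ins to show the locator choices a candidate might explain.
    async function loginSmokeTest() {
      const driver = await remote({
        hostname: 'localhost',
        port: 4723,
        capabilities: {
          platformName: 'Android',
          'appium:automationName': 'UiAutomator2',
          'appium:app': '/path/to/app.apk',
        },
      });

      try {
        // Prefer accessibility IDs (the "~" prefix) -- they tend to be stable across platforms
        await driver.$('~username-input').setValue('testuser');
        await driver.$('~password-input').setValue('not-a-real-password');
        await driver.$('~login-button').click();

        // Fall back to a resource-id locator when no accessibility ID exists
        const banner = await driver.$('android=new UiSelector().resourceId("com.example:id/welcome_banner")');
        await banner.waitForDisplayed({ timeout: 5000 });
      } finally {
        await driver.deleteSession();
      }
    }

    loginSmokeTest();

You're grading the locator strategy and the explanation, not a green checkmark from a device farm.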

How could the recruiter help here?

Let's assume that Jake was aware of the problem with the tech assessment (or at least suspected one).

What can he do? What if he doesn't know what a good technical assessment looks like?

  • Talk to an engineer on the team. The ICs have a hand in hiring but weren't part of the live coding challenge. Ask for their thoughts on the challenge and compare it to the req.
  • Do your research. Google these technical concepts and educate yourself, or use AI for quick notes. It's easier than ever to build a role-specific engineering GPT that helps you have stronger, more informed conversations with technical leaders about the specifics of the role you're hiring for.
  • Spend time with your candidates. If Jake had gone deeper with the last person he interviewed, he likely would've caught the process issues sooner. By the time I came up to bat, things could've been aligned and I might have made it through. I mean, I got thumbs up across the board except for the LeetCode challenge, which knocked me out.

Epilogue

Yes, this is a hiring story, and stories sometimes have epilogues. In this case, a week after I got rejected from this company, I got hired at Arine, my new gig.

But I had given Jake my feedback and he had apparently shared it with the team. He called to give me an update.

Jake told me the QA Director had taken it to heart, and that they were reportedly already in the process of revamping the technical assessment.

Who knew? Sometimes decision-makers respect the data and respond to candidate feedback.

Next week, we'll dive into another hiring disaster and how technical recruiters can keep their clients from falling into a "fool's choice" scenario.

Seeya then,

Steven

The Better Vetter Letter

Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.
