How does your manager evaluate technical expertise? (part 2)


Reader,

In Part 1, we talked about Jake, one of the best recruiters I've ever worked with as a candidate. He was great, but he ultimately had a "miss" in his hiring practice that cost his team time.

Jake spent hours sourcing and interviewing candidates for this role, only to see every finalist rejected because they failed the assessment.

At first, Jake thought he was just sourcing the wrong candidates.

When we took time to compare the assessment to the actual job requirements, the misalignment was obvious.

Now he’s back at square one, working with the hiring manager to fix the assessment, and he’s already lost strong candidates who would’ve likely succeeded in the role.

How did I know other candidates had failed this assessment?

Jake told me about it.

There was a candidate for a Staff QA Engineer position -- I was going for Senior QA -- who, in retrospect, seemed to encounter the same problems I did with the tech assessment.

Jake's story about this other candidate began to sound like my story...

I was another data point for Jake.
A competent QA engineer who got tripped up by a misaligned LeetCode assessment.

I spent extra time walking him through the assessment so he could see where the process missed the mark. I’m sure he’s wishing he’d reviewed this earlier with prior candidates.

If you're the technical recruiter and you're getting green flags for your candidate EXCEPT for the technical assessment, you may want to evaluate it.

Is your tech assessment serving you?

The req in question asked for these skills as the "dealbreakers". Nowhere in the req did it mention the need to write complex algorithms (this is important, as we'll soon see).

To obscure the name of the company, I fed the req into Gemini and asked it to condense the most essential skills into a brief bulleted list:

Their Assessment

I haven't really told you much about their tech assessment yet, so let's dive into that first.

Part 1 - "...but can they code?"

The LeetCode-style problem they gave me was "find the next largest palindrome".

A quick AI query will tell you this was not an easy problem.

This was a poorly designed test for coding ability, mostly because it was a Hard-level LeetCode problem.
If you want to know if someone can write code, picking an easier problem has zero downsides.

I didn't get anywhere close to solving it -- all I managed to do was create an "isPalindrome" function.
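For the curious, the isPalindrome piece really is the trivial part. Here's a sketch of what that function looks like (a reconstruction for illustration, not my actual submission), along with the brute-force next-palindrome search that hints at why the full problem is rated Hard:

```typescript
// Check whether a number reads the same forwards and backwards.
// This is the easy warm-up; the hard part of "find the next largest
// palindrome" is constructing the answer efficiently, not recognizing one.
function isPalindrome(n: number): boolean {
  const s = String(n);
  let left = 0;
  let right = s.length - 1;
  while (left < right) {
    if (s[left] !== s[right]) return false;
    left++;
    right--;
  }
  return true;
}

// A brute-force solution is trivial to write but far too slow for
// large inputs -- which is exactly what makes the real problem Hard.
function nextPalindromeBruteForce(n: number): number {
  let candidate = n + 1;
  while (!isPalindrome(candidate)) candidate++;
  return candidate; // e.g. 123 -> 131, 99 -> 101
}
```

The efficient solution involves mirroring the left half of the digits and handling carry/overflow edge cases, which is string-manipulation gymnastics that no QA automation role requires day to day.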

Not only are QA folks unaccustomed to writing convoluted algorithms; it's also not part of their job to write this kind of code.

Still, they gave me 4 out of 4 points because I confirmed I could write functional TypeScript code.

Meh.

Part 2 - the test results aggregation/summary algorithm

After that trainwreck, they gave me a problem that I admit was MUCH more relevant to QA Automation work.

The only problem: this role didn't need someone to create their own reporter from scratch.

Since the team uses Playwright, and no custom reporters were needed, the person they hired would never need to modify reporter code.

So...why have the candidate prove that they can take test results JSON and manipulate it into a report on how many tests were flaky?
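For readers who haven't seen one of these exercises, it boils down to something like the sketch below. The field names here are illustrative stand-ins, not Playwright's exact JSON-reporter schema; the core idea is that a test is "flaky" when it failed at least once and then passed on a retry.

```typescript
// Simplified shape of a test runner's JSON results. These field names
// are stand-ins for illustration, not Playwright's exact reporter schema.
interface AttemptResult {
  status: string; // e.g. "passed" or "failed" (real runners have more states)
}

interface TestEntry {
  title: string;
  results: AttemptResult[]; // one entry per attempt, retries included
}

// A test is "flaky" if it failed on some attempt but ultimately passed.
function countFlaky(tests: TestEntry[]): number {
  return tests.filter((t) => {
    const failedOnce = t.results.some((r) => r.status === "failed");
    const finallyPassed =
      t.results[t.results.length - 1]?.status === "passed";
    return failedOnce && finallyPassed;
  }).length;
}
```

It's a perfectly fine data-munging exercise; it's just not signal for a role where the reporter already exists and nobody will touch it.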

Seems like a waste of everyone's time. There are much better ways to assess QA Automation skills.

Like what? Glad you asked 😁

A better assessment for this position would've been:

  • Give the candidate a bug report and ask them to write a Playwright test that would have caught that bug. Have them talk through their solution before writing any code, and then explain the code they wrote
  • Give the candidate a feature and have them talk through what they would test at the E2E, API/integration, and unit layers. Ask them to explain why. Perhaps have them write an E2E or API test or both, if you can provide access to a dummy API or demo app.
  • Have a QA Engineer on the call to help assess code quality, and ask the candidate to write an Appium script for a mobile app feature. Provide the necessary XML for the candidate to make correct locators for the UI elements. Ask them to explain why they chose certain locator strategies or assertions, and any other scripting choices they made.

Regarding that last one, I've never seen it done in the real world because running automated tests against mobile devices requires a lot of technical setup that you can't reasonably ask candidates to do.

That said, with the right expertise in the room, you don't need to run the test code in order to get good signal on whether or not the candidate knows how to use Appium.

How could the recruiter help here?

Let's assume that Jake was aware of the problem with the tech assessment (or at least suspects a problem).

What can he do? What if he doesn't know what a good technical assessment looks like?

  • Talk to an engineer on the team. The ICs have a hand in hiring but weren't part of the live coding challenge. Ask for their thoughts on the challenge and compare it to the requirements
  • Do your research. Google these technical concepts and educate yourself, or use AI for quick notes. It’s easier than ever to build a role-specific engineering GPT that helps you have stronger, more informed conversations with technical leaders, grounded in the specifics of the role you're hiring for
  • Spend time with your candidates. If Jake had gone deeper with the last person he interviewed, he likely would’ve caught the process issues sooner. By the time I came up to bat, things could’ve been aligned and I might have made it through. I mean, I got thumbs up across the board except for the LeetCode challenge, which knocked me out.

Epilogue

Yes, this is a hiring story, and stories sometimes have epilogues. In this case, a week after I got rejected from this company, I got hired at Arine, my new gig.

But I had given Jake my feedback and he had apparently shared it with the team. He called to give me an update.

Jake told me the QA Director had taken the feedback to heart, and they were already in the process of revamping the technical assessment.

Who knew? Sometimes decision-makers respect the data and respond to candidate feedback.

Next week, we'll dive into another hiring disaster and how technical recruiters can keep their clients from falling into a "fool's choice" scenario.

Seeya then,

Steven

The Better Vetter Letter

Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.
