How to avoid rolling the dice on a candidate (part 2)


Reader,

In Part 1, you saw how a facilities management SaaS had to take a leap of faith because they didn't assess the technical ability of their candidates.

Quick recap: these were the technical skills the role actually required, based on conversations with the hiring team:

  • Maestro or Appium (mobile app test automation scripting & debugging)
  • Mobile testing
  • TypeScript
  • CI/CD for running tests (to fix the pipeline that runs the mobile tests or continue expanding it)
  • Playwright (to occasionally help out the other team that handles web automation)

To be clear, none of these skills were assessed during the interview. Not directly.

I was asked questions like “tell me about a time you did X” or “how did you achieve Y using Z technology.” But being able to talk through past achievements doesn’t actually validate the skills required to produce them. The questions were surface-level enough that even a non-engineer could have walked through them, and if that’s the depth being assessed, it’s simply not sufficient.

They needed a technical assessment involving analyzing, editing, or creating new test code.

Maestro / Appium

I've talked briefly in a previous email about why this is tough to assess:

You won't be able to run the tests because the environment setup is more than remote interviews allow for.

In theory, if you had an on-site interview for a mobile test automation role, you could set the environment up before the candidate arrived and all they'd have to do is write test code.

But in the remote world, you'd have to ask the candidate to install a bunch of software and configure emulators, SDKs, and drivers on their own machine, and that rarely works out.

I digress.

This does not mean you can't assess the skill.

  1. Write some Appium or Maestro code that you have verified actually works
  2. Break it -- introduce problems for the candidate to detect

The interview would be very low-tech but informative:

  1. Ask the candidate what's wrong with the code
  2. Ask them to explain their answer
  3. Explain the context in which the tests are run, and ask for suggestions on how they would expand the framework to accommodate unmet needs (e.g., are they being run in CI? is there reporting? is there enough info for debugging? are we running the tests on multiple devices? etc.)

This can reveal how much they know about mobile automation frameworks, and how well they can read test automation code.
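The seed-a-defect exercise can be sketched in plain TypeScript so it runs without a device or an Appium install. The `pollUntil` helper below is hypothetical, not from any real framework, but it's the kind of utility code a mobile suite contains:

```typescript
// Working version (your answer key): retries a check until it passes
// or the deadline expires.
async function pollUntil(
  check: () => Promise<boolean>,
  timeoutMs: number,
  intervalMs: number,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}

// Broken variant you hand the candidate. Seeded defects:
//   1. No sleep between attempts, so it busy-loops instead of polling.
//   2. It returns true on timeout, reporting success even though the
//      check never passed.
async function pollUntilBroken(
  check: () => Promise<boolean>,
  timeoutMs: number,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return true; // no interval wait: busy-loop
  }
  return true; // bug: false success on timeout
}
```

In the interview, you'd show only the broken variant and ask the questions above; the working version stays in your back pocket as the answer key.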

This is plenty for assessing mid-level candidates.

Mobile Testing

You could combine this with the Maestro / Appium portion by giving the candidate a full description of the feature targeted by the automated test code.

You might ask the candidate questions like these:

  1. Is there any other workflow for this feature you think deserves to be included in the automated test suite?
  2. Which scenarios might you "manually" test but not automate?
  3. Are there any E2E scenarios that you think should be covered at a different layer instead? (e.g., API/integration, unit)

Of course, you'll want them to explain why they would automate or not automate certain scenarios, and why they would choose to move a test down from E2E to integration or unit layers.

This reveals plenty about their understanding of how to scale test automation, and how they think about testing new features.

TypeScript

This could be tested implicitly by writing the Appium portion of the assessment in TypeScript. (Maestro flows are plain YAML, so they won't exercise TypeScript skills.)
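One way to seed a TypeScript-specific talking point: plant a loose type annotation that lets a typo compile. The helper names below are invented for illustration:

```typescript
// Hypothetical reporting helper a test framework might contain.
interface TestResult {
  name: string;
  passed: boolean;
}

// Strictly typed version: a misspelled field would fail to compile.
function summarize(results: TestResult[]): string {
  const failed = results.filter((r) => !r.passed).length;
  return `${results.length} run, ${failed} failed`;
}

// Seeded flaw for the candidate: `any[]` silences the type checker, so
// the misspelled field `pased` compiles and every result looks failed.
function summarizeLoose(results: any[]): string {
  const failed = results.filter((r) => !r.pased).length; // typo compiles
  return `${results.length} run, ${failed} failed`;
}
```

A candidate who knows TypeScript should spot that strict typing would have caught the typo, which tells you more than asking them to recite what `any` means.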

CI / CD for running tests

This could be included as part of the Maestro / Appium assessment as well. Just add a GitHub Actions workflow file and have them critique it, just like with the test code.
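The workflow file for this exercise can be small. Everything below is invented for illustration (job names, scripts, versions), and the comments are your answer key -- strip them before showing the file to the candidate:

```yaml
# Hypothetical workflow for the critique exercise.
# Seeded talking points:
#   1. Tests run only on push to main, so failures surface after merge,
#      not on pull requests.
#   2. No dependency caching, so every run reinstalls from scratch.
#   3. No artifact upload, so a red build leaves no logs or screenshots
#      to debug with.
name: mobile-e2e
on:
  push:
    branches: [main]
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm install
      - run: npm run test:e2e
```

A candidate who has actually maintained a pipeline will raise at least one of these unprompted.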

They don't need to generate new code from scratch. AI will probably end up doing a lot of common generative tasks like that, anyway.

What matters nowadays for assessing competence is verifying their ability to critically analyze code.

If they can properly vet code you give them, they can vet generative AI output when they use tools like Cursor on the job.

Playwright

For this one, there are a couple of options that make sense if you still want to keep the interview short:

  1. A similar read-and-critique exercise like the ones for Maestro / Appium and CI / CD, but for Playwright/TypeScript code
  2. Let them use a free code-assist AI tool like Copilot to generate a Playwright test for a feature, given a demo app that they can point to. Watch how they prompt and how they vet the code generated by the AI. See if they ask about using an MCP server like Playwright MCP to let the agent access the browser (this is a "green flag" that indicates they're keeping up with the latest AI tooling for test automation).
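To make option 1 concrete, here is the kind of AI-generated Playwright test you might have a candidate vet. The URL, selectors, and feature are invented, and the comments flagging the smells are your answer key -- a strong candidate should call these out unprompted:

```typescript
import { test, expect } from '@playwright/test';

test('user can submit a work order', async ({ page }) => {
  await page.goto('https://demo.example.com/work-orders');
  // Smell 1: hard-coded sleep instead of waiting on a condition.
  await page.waitForTimeout(5000);
  // Smell 2: brittle CSS selector tied to page layout instead of a
  // role or label.
  await page.click('div.main > div:nth-child(3) > button');
  await page.fill('#desc', 'Broken HVAC unit');
  await page.click('text=Submit');
  // Smell 3: asserts nothing about the outcome beyond the URL.
  await expect(page).toHaveURL(/work-orders/);
});
```

Whether they catch the hard-coded wait alone tells you a lot about how much real-world flakiness they've debugged.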

Most QA Automation jobs I'm seeing nowadays are asking candidates to know how to generate code with AI tools. It makes sense to include it in the interview process. I've already seen this done once.

Conclusion

Aaaand, that's a wrap on the tech assessment options.

Obviously, there might not be time for all of these. The hiring team would need to decide where they'll get the most signal and which assessments they can do without.

But as the recruiter, you have the responsibility to at least know at a high level how skills can be assessed, and what some tradeoffs are between different types of assessments so you can guide your management team accordingly.

You don't have to know Maestro or Appium to know that it's probably a good idea to assess technical ability for a role that requires these skills.

If, at the end of the interview loop, your management team can’t determine who’s stronger and there’s no rubric to anchor the decision, then the process wasn’t properly structured from the start.

Next week, for the grand finale this month, we'll dive into the most innovative tech assessment I saw during my job hunt.

(Yes, it lets the candidate use AI)

-Steven

The Better Vetter Letter

Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.
