How to avoid rolling the dice on a candidate (part 2)


Reader,

In Part 1, you saw how a facilities management SaaS had to take a leap of faith because they didn't assess the technical ability of their candidates.

Quick recap: based on conversations with the hiring team, these were the technical skills the role actually required:

  • Maestro or Appium (mobile app test automation scripting & debugging)
  • Mobile testing
  • TypeScript
  • CI/CD for running tests (to fix the pipeline that runs the mobile tests or continue expanding it)
  • Playwright (to occasionally help out the other team that handles web automation)

To be clear, none of these skills were assessed during the interview. Not directly.

I was asked questions like “tell me about a time you did X” or “how did you achieve Y using Z technology.” But being able to talk through past achievements doesn’t actually validate the skills required to produce them. The questions were surface-level enough that even a non-engineer could have walked through them, and if that’s the depth being assessed, it’s simply not sufficient.

They needed a technical assessment that involved analyzing, editing, or writing test code.

Maestro / Appium

I've talked briefly in a previous email about why this is tough to assess:

You won't be able to run the tests, because the environment setup demands more than a remote interview allows.

In theory, if you had an on-site interview for a mobile test automation role, you could set the environment up before the candidate arrived and all they'd have to do is write test code.

But in the remote world, you'd have to ask the candidate to install a bunch of software, configure this and that, and it usually wouldn't work out.

I digress.

This does not mean you can't assess the skill.

  1. Write some Appium or Maestro code that you have verified actually works
  2. Break it -- introduce problems for the candidate to detect
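As a sketch, the artifact you hand over might look like this Maestro flow. Everything here is hypothetical (the app id, the screen labels, and the planted problems are all invented for illustration), and you'd strip the comments before giving it to the candidate:

```yaml
# login-flow.yaml -- hypothetical Maestro flow for a made-up login screen.
appId: com.example.fieldops
---
- launchApp                 # planted: no clearState, so a logged-in session leaks between runs
- tapOn: "Email"
- inputText: "qa@example.com"
- tapOn: "Password"
- inputText: "hunter2"
- tapOn: "Log in"
- assertVisible: "Welcome"  # planted: no extendedWaitUntil first, so this is flaky on slow devices
```

A candidate who has actually run Maestro flows tends to spot the missing state reset and the flakiness risk quickly; a candidate who has only talked about mobile automation usually doesn't.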

The interview would be very low-tech but informative:

  1. Ask the candidate what's wrong with the code
  2. Ask them to explain their answer
  3. Explain the context in which the tests are run, and ask for suggestions on how they would expand the framework to accommodate unmet needs (e.g., are they being run in CI? is there reporting? is there enough info for debugging? are we running the tests on multiple devices? etc.)

This can reveal how much they know about mobile automation frameworks, and how well they can read test automation code.

This is plenty for assessing mid-level candidates.

Mobile Testing

You could combine this with the Maestro / Appium portion by giving the candidate a full description of the feature targeted by the automated test code.

You might ask the candidate questions like these:

  1. Is there any other workflow for this feature you think deserves to be included in the automated test suite?
  2. Which scenarios might you "manually" test but not automate?
  3. Are there any E2E tests that you think should be tested at a different layer? (e.g., API/integration, unit)

Of course, you'll want them to explain why they would automate or not automate certain scenarios, and why they would choose to move a test down from E2E to integration or unit layers.

This reveals plenty about their understanding of how to scale test automation, and how they think about testing new features.

TypeScript

This could be tested implicitly by using TypeScript code for the Maestro / Appium assessment.
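For example, a small TypeScript helper bundled with the test code doubles as a reading-comprehension probe. This one is purely hypothetical (the function name, the deep-link scheme, and the planted encoding bug are all invented for illustration):

```typescript
// Hypothetical helper handed to the candidate alongside the test code.
// It builds the deep link the tests use to jump straight to a screen.
// One defect is planted deliberately: query parameter values are never
// URL-encoded, so values containing spaces or '&' produce broken links.
// A candidate comfortable with TypeScript should catch it while reading.

function buildDeepLink(
  screen: string,
  params: Record<string, string> = {}
): string {
  const query = Object.entries(params)
    .map(([key, value]) => `${key}=${value}`) // planted bug: no encodeURIComponent
    .join("&");
  return query ? `myapp://${screen}?${query}` : `myapp://${screen}`;
}

// The space in the value survives unencoded -- the planted defect in action:
console.log(buildDeepLink("profile", { user: "a b" })); // myapp://profile?user=a b
```

Whether they reach for `encodeURIComponent` (and can say why) tells you more about their TypeScript fluency than any "tell me about a time" question.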

CI / CD for running tests

This could be included as part of the Maestro / Appium assessment as well. Add a GitHub Actions workflow file and have them critique it, just like the test code.
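A sketch of what that workflow file might look like, with the planted issues flagged in comments for your benefit (the workflow name and steps are hypothetical, and you'd strip the comments before handing it over):

```yaml
# Hypothetical workflow handed to the candidate for critique.
name: mobile-tests
on:
  workflow_dispatch:        # planted: manual trigger only, never runs on push or PR
jobs:
  e2e:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20  # planted: no dependency caching configured
      - run: npm install
      - run: npm test       # planted: no report or screenshot upload on failure
```

If they call out that the suite never runs automatically and that a red build leaves nothing behind to debug, you've learned what you needed to about their CI/CD experience.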

They don't need to generate new code from scratch. AI will probably end up doing a lot of common generative tasks like that, anyway.

What matters nowadays for assessing competence is verifying their ability to critically analyze code.

If they can properly vet code you give them, they can vet generative AI output when they use tools like Cursor on the job.

Playwright

For this one, there are a couple of options that make sense if you still want to keep the interview short:

  1. Similar read/critique test like for Maestro / Appium and CI / CD for running tests, but for Playwright/TypeScript code
  2. Let them use a free code-assist AI tool like Copilot to generate a Playwright test for a feature, given a demo app they can point it at. Watch how they prompt and how they vet the code the AI generates. See if they ask about using an MCP server like Playwright MCP to let the agent access the browser (a "green flag" that indicates they're keeping up with the latest AI tools for test automation).

Most QA Automation jobs I'm seeing nowadays are asking candidates to know how to generate code with AI tools. It makes sense to include it in the interview process. I've already seen this done once.

Conclusion

Aaaand, that's a wrap on the tech assessment options.

Obviously, there might not be time for all of these. The hiring team would need to decide where they'll get the most signal and which assessments they can do without.

But as the recruiter, you have the responsibility to at least know at a high level how skills can be assessed, and what some tradeoffs are between different types of assessments so you can guide your management team accordingly.

You don't have to know Maestro or Appium to know that it's probably a good idea to assess technical ability for a role that requires these skills.

If, at the end of the interview loop, your management team can’t determine who’s stronger and there’s no rubric to anchor the decision, then the process wasn’t properly structured from the start.

Next week, for the grand finale this month, we'll dive into the most innovative tech assessment I saw during my job hunt.

(Yes, it lets the candidate use AI)

-Steven

The Better Vetter Letter

Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.
