Reader, In Part 1, you saw how a facilities management SaaS had to take a leap of faith because they didn't assess their candidates' technical ability. Quick recap of the technical skills the role actually required, based on conversations with the hiring team: Maestro / Appium, mobile testing, TypeScript, CI/CD for running tests, and Playwright.
To be clear, none of these skills were assessed during the interview. Not directly. I was asked questions like "tell me about a time you did X" or "how did you achieve Y using Z technology." But being able to talk through past achievements doesn't actually validate the skills required to produce them. The questions were surface-level enough that even a non-engineer could have walked through them, and if that's the depth being assessed, it's simply not sufficient. They needed a technical assessment involving analyzing, editing, or creating test code.

Maestro / Appium

I've talked briefly in a previous email about why this is tough to assess: you won't be able to run the tests, because the environment setup is more than remote interviews allow for. In theory, if you had an on-site interview for a mobile test automation role, you could set up the environment before the candidate arrived, and all they'd have to do is write test code. But in the remote world, you'd have to ask the candidate to install a bunch of software and configure this and that, and it usually wouldn't work out. I digress. This doesn't mean you can't assess the skill.
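One thing that makes this workable: a Maestro flow is plain YAML, so it's easy to hand over for a paper critique with no environment setup at all. Here's a sketch of what that handout might look like (the app id, element ids, and credentials are hypothetical, and it includes a couple of deliberate smells, hard-coded credentials and a brittle text-based assertion, for the candidate to catch):

```yaml
# Hypothetical Maestro flow for a login screen (ids are made up)
appId: com.example.facilities
---
- launchApp
- tapOn:
    id: "email_input"
- inputText: "qa+demo@example.com"
- tapOn:
    id: "password_input"
# Smell to flag: credentials hard-coded in the flow
- inputText: "hunter2"
- tapOn: "Log In"
# Smell to flag: asserting on display text rather than a stable id,
# which breaks the moment the copy changes
- assertVisible: "Dashboard"
```

A candidate who has actually worked with mobile automation will spot these in seconds; one who has only talked about it will not.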
The interview would be very low-tech but informative: share some Maestro or Appium test code on screen, ask the candidate to walk through what it does, then have them point out anything they'd fix or improve. No environment setup required.
This can reveal how much they know about mobile automation frameworks, and how well they can read test automation code. This is plenty for assessing mid-level candidates.

Mobile Testing

You could combine this with the Maestro / Appium portion by giving the candidate a full description of the feature targeted by the automated test code. You might ask the candidate questions like these:

- Which of these scenarios would you automate, and which would you leave to manual or exploratory testing?
- Does this coverage belong at the E2E layer, or could some of it move down to integration or unit tests?
Of course, you'll want them to explain why they would automate or not automate certain scenarios, and why they would choose to move a test down from E2E to the integration or unit layers. This reveals plenty about their understanding of how to scale test automation, and how they think about testing new features.

TypeScript

This could be tested implicitly by using TypeScript code for the Maestro / Appium assessment.

CI / CD for running tests

This could be included as part of the Maestro / Appium assessment as well. Just add a GitHub Actions workflow file and have them critique it, just like with the test code. They don't need to generate new code from scratch. AI will probably end up doing a lot of common generative tasks like that anyway. What matters nowadays for assessing competence is verifying their ability to critically analyze code. If they can properly vet code you give them, they can vet generative AI output when they use tools like Cursor on the job.

Playwright

For this one, there are a couple of options that make sense if you still want to keep the interview short:
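To make the CI workflow critique concrete, here's a sketch of the kind of GitHub Actions file you might hand over (the workflow name, branch, and npm scripts are hypothetical, with two issues planted for the candidate to flag, a push-only trigger and a non-reproducible install step):

```yaml
# Hypothetical workflow for running the test suite (names are made up)
name: e2e-tests
on:
  push:               # Planted issue: no pull_request trigger,
    branches: [main]  # so tests never gate PRs before merge
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Planted issue: `npm install` can drift from the lockfile;
      # `npm ci` is the reproducible choice, and there's no caching
      - run: npm install
      - run: npm test
```

Again, the point isn't whether they can write this file from memory; it's whether they can read it and tell you what would bite the team in production.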
Most QA Automation jobs I'm seeing nowadays are asking candidates to know how to generate code with AI tools. It makes sense to include it in the interview process. I've already seen this done once.

Conclusion

Aaaand, that's a wrap on the tech assessment options. Obviously, there might not be time for all of these. The hiring team would need to decide where they'll get the most signal and which assessments they can do without. But as the recruiter, you have the responsibility to at least know, at a high level, how skills can be assessed and what the tradeoffs are between different types of assessments, so you can guide your management team accordingly. You don't have to know Maestro or Appium to know that it's probably a good idea to assess technical ability for a role that requires those skills. If, at the end of the interview loop, your management team can't determine who's stronger and there's no rubric to anchor the decision, then the process wasn't properly structured from the start.

Next week, for the grand finale this month, we'll dive into the most innovative tech assessment I saw during my job hunt. (Yes, it lets the candidate use AI.)

-Steven
Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.