Don’t be “AI-shy” (part 1)


Reader,

I've saved the best for last, truly. This one was a software testing services firm.

In the last several emails about interviews from my job hunt, I've provided the req.

Well, this place didn't have reqs. You didn't find them -- they found you. It's a cool story, but it makes it harder to write about the interview process objectively.

Subjectively? It was awesome.

This firm wanted to hire Quality Engineers who are mature enough to make solid judgment calls about testing and automation. So, not juniors...seniors.

They specifically hired folks who already had a working knowledge of tools like Cursor, since they advertised that their QA Engineers use AI tooling to speed up test automation work.

Some tools a candidate would want to bring to this firm's tech assessment:

  • Playwright
  • TypeScript
  • Cursor, Copilot or Claude Code
  • Knowledge of context management strategies for agentic coding (Rules, Skills, etc.)
  • Playwright MCP for letting the agent access the browser

The Assignment

This one, unlike the other interviews I chatted about, was a take-home assignment.

  1. Given a demo app and 6 test scenarios to automate, use an AI tool to help you create a data-driven test automation solution using Playwright and TypeScript.
  2. Share the solution in a GitHub repo along with a video of you talking through your solution.
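For context, "data-driven" here means the test scenarios live in a table that feeds one parameterized test, instead of six copy-pasted test blocks. Here's a minimal sketch of the pattern in plain TypeScript — the scenario names and fields are invented (the assignment's actual six scenarios aren't shown here), and the `login` function is a stand-in for Playwright driving the demo app:

```typescript
// Hypothetical scenario table — fields and values are illustrative only.
interface Scenario {
  name: string;
  username: string;
  password: string;
  shouldSucceed: boolean;
}

const scenarios: Scenario[] = [
  { name: "valid credentials", username: "demo", password: "demo123", shouldSucceed: true },
  { name: "wrong password",    username: "demo", password: "oops",    shouldSucceed: false },
  { name: "blank username",    username: "",     password: "demo123", shouldSucceed: false },
];

// Stand-in for the system under test; in the real solution this would be
// Playwright filling in the demo app's login form and reading the result.
function login(username: string, password: string): boolean {
  return username === "demo" && password === "demo123";
}

// One loop generates a check per row. In Playwright proper, the loop body
// would be wrapped in test():
//   for (const s of scenarios) { test(s.name, async ({ page }) => { ... }); }
for (const s of scenarios) {
  const actual = login(s.username, s.password);
  console.log(`${s.name}: ${actual === s.shouldSucceed ? "PASS" : "FAIL"}`);
}
```

The payoff of the pattern is that adding a seventh scenario is one new row in the table, not another test block to maintain.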

The idea was for the candidate to complete it in under 2 hours.

It took me 30 minutes to meet minimum requirements using Cursor.

But of course, why stop there? I spent another 90 minutes adding bells & whistles and doing a final proofread.

I aced the heck out of this assessment because it was asking me to do what I do on the job, not some hackneyed Leetcode problem someone got from ChatGPT.

Why this works

On the surface, this seems like an easy assessment to cheat on.

  • You get to use AI to write code for you
  • Nobody's watching you

But in fact, there are traps that candidates wouldn't realize they'd fallen into until it was too late.

The "AI Slop" Trap

If the candidate doesn't know what they're doing, it'll show up.

That's because the reviewer of the take-home assessment is a seasoned Engineer themselves.

Everyone hired goes through this process.

They know what "good" looks like for the roles they see in the market, so it's easy to:

  • home in on the specific skills worth assessing
  • curate the assessment so the candidate can use AI
  • spot AI slop from a mile away

This assessment is designed to catch the candidate who submits a half-baked solution that "works", while confirming a senior's ability to vet the design choices & quality of the code AI is generating (or guide it with better context to produce better code from the get-go).

Even so, at least they're honest about this in the instructions. There's a fine-print section at the top where they ask the candidate to double-check AI's work.

The "Junior" Trap

The assessment is clear about what it asks for, but it doesn't tell you what else you could add beyond the minimum.

A junior would be content to complete this challenge in a timely manner.

A senior would know that there are ample opportunities to show off if they want to dedicate the extra time (and prompts):

  • CI/CD pipeline to run the tests
    • Manual trigger to let the reviewer run the tests on-demand
    • Downloadable test execution summaries
    • Matrix strategy to run the tests on multiple browsers
  • Fine-tuned locators (no such thing as "perfect", but you can get damn close 😜)
    • avoid using crappy XPath or CSS selectors
    • choose `getByRole` or `getByTestId` wherever possible
    • worst case, chain off of a resilient locator if you have to target a hard-to-reach element
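To make the locator point concrete, here's a hedged sketch of that preference order in a Playwright test. The page URL, roles, names, and test IDs are all invented for illustration, not from the actual assignment:

```typescript
import { test, expect } from '@playwright/test';

test('order confirmation', async ({ page }) => {
  await page.goto('https://demo.example.com/orders'); // hypothetical demo app

  // Best: role + accessible name — survives markup and styling refactors
  await page.getByRole('button', { name: 'Submit order' }).click();

  // Also strong: an explicit test id the team controls
  await expect(page.getByTestId('order-status')).toHaveText('Confirmed');

  // Worst case: chain off a resilient anchor to reach an awkward element,
  // rather than reaching for an absolute XPath or a deep CSS selector
  const row = page.getByRole('row', { name: 'Order #1234' });
  await row.locator('td').last().click();
});
```

The common thread: every locator above is tied to something a user or the team deliberately controls (roles, labels, test IDs), not to incidental DOM structure that refactors will break.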

Those are two ways to stand out, and they show that you have opinions about test automation frameworks, not just the ability to have AI script some test scenarios for you.

In Conclusion...

If you want an "AI-powered QA Automation Engineer", there are suitable evaluations you can create.

This firm's approach was a no-BS way to test the exact skills required for the job. Success is defined up front, and the process is clear to all parties and aligned with the funnel of openings. Now this is streamlined.

In Part 2, I'll go deeper on how tech assessments are evolving, and why this kind of approach is what I expect to see more of.

-Steven

The Better Vetter Letter

Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.
