Why most tech assessments suck (Part 1)


Reader,

Steven here. So, let's talk tech assessments.

I was recently on the hunt for a new job thanks to my previous employer making some questionable business decisions in 2025.

One of the first interviews I got was for a "field service management SaaS" company.

AKA, they serve landscaper business owners who need help managing their operations.

(No names 😉)

But you're here because you want to know what their tech assessment looked like.

It was a 2-parter:

1. Playwright expertise (refactor a shitty Playwright test to show you know what "good" looks like)
2. AI familiarity
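(For a sense of what that first part is after, here's a hypothetical before/after sketch -- not the actual assessment code. The names `LoginPage`, `signIn`, and the selectors are all made up for illustration, and `Page`/`Locator` are tiny local stand-ins so the sketch runs anywhere; real code would import them from `@playwright/test`.)

```typescript
// Minimal stand-ins for the Playwright types (illustration only).
type Locator = { click: () => Promise<void>; fill: (v: string) => Promise<void> };
type Page = { locator: (sel: string) => Locator };

// BEFORE: selectors duplicated inline in every test, brittle positional CSS.
async function loginTestBefore(page: Page) {
  // await page.waitForTimeout(5000);  // arbitrary sleep -- classic anti-pattern
  await page.locator('#user').fill('steven');
  await page.locator('#pass').fill('hunter2');
  await page.locator('form button:nth-child(3)').click(); // breaks if the form changes
}

// AFTER: a Page Object centralizes selectors and actions, so tests stay readable
// and a UI change means one fix, not fifty.
class LoginPage {
  constructor(private readonly page: Page) {}
  readonly user = () => this.page.locator('#user');
  readonly pass = () => this.page.locator('#pass');
  readonly submit = () => this.page.locator('[data-testid="login-submit"]'); // stable hook

  async signIn(username: string, password: string) {
    await this.user().fill(username);
    await this.pass().fill(password);
    await this.submit().click();
  }
}

async function loginTestAfter(page: Page) {
  await new LoginPage(page).signIn('steven', 'hunter2');
  // In real Playwright you'd finish with a web-first assertion that auto-waits:
  // await expect(page.getByText('Welcome')).toBeVisible();
}
```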

What the heck's "AI familiarity", you ask? Yeah...

In this client's view, it meant:

Can the candidate translate a Page Object Model class file from Java to TypeScript using AI tools?
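(If that's abstract, here's roughly what the task looks like -- a hypothetical `InvoicePage` I made up, not the file from the assessment. The Java side is the kind of Selenium-era class you'd feed the AI; the TypeScript side is what it should spit out. `Page`/`Locator` are local stand-ins for the `@playwright/test` types so the sketch is self-contained.)

```typescript
// Java source you'd hand to the AI tool:
//
//   public class InvoicePage {
//       @FindBy(id = "invoice-total") private WebElement total;
//       public InvoicePage(WebDriver driver) { PageFactory.initElements(driver, this); }
//       public String getTotal() { return total.getText(); }
//   }

// Minimal stand-ins for the Playwright types (illustration only).
type Locator = { textContent: () => Promise<string> };
type Page = { locator: (sel: string) => Locator };

// The TypeScript/Playwright target:
class InvoicePage {
  private readonly total: Locator;

  constructor(private readonly page: Page) {
    // Playwright locators are lazy, so building them up front is safe --
    // the rough equivalent of Java's @FindBy fields.
    this.total = page.locator('#invoice-total');
  }

  async getTotal(): Promise<string> {
    return this.total.textContent();
  }
}
```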

Here's a boiled-down list of the skills they demanded for this role, taken from the req (they've since filled it, but I found the details on a Glassdoor posting that's still up):

  • QA Automation: Hands-on experience building/maintaining web test suites using Playwright or Cypress (specifically using Page Object Models).
  • Coding: Proficiency in TypeScript (preferred), JavaScript, or Python.
  • CI/CD & DevOps: Experience integrating tests into pipelines, managing parallelization, and using Git workflows.
  • Strategic Testing: Ability to write clear test plans and balance exploratory testing with automation.
  • Soft Skills: Strong communication for cross-functional influence and the ability to work with moderate autonomy.
  • Mobile (Bonus): Exposure to Flutter, Dart, or mobile tools like Appium/Patrol is a significant plus.

Alrighty, let's see how the task maps to the req...

✅ Playwright test refactor -- Playwright

✅ Page Object Model class file migration -- Page Object Model

⚠️ Migrate from Java to TypeScript -- TypeScript??

From where I'm sitting, a Java-to-TypeScript migration was a weird choice for testing candidates' AI skills.

Funny enough, the team laughed whenever Java came up, saying it's a tool that belongs in the early 2000s.

In other words, they wouldn't be caught dead using Java on their team.

When I asked them if they were migrating from Java currently, they said no. It was a TypeScript shop.

So why test candidates on migrating from Java?

Not to mention the untested skills in the req:

❌ CI / CD

❌ Test Plans

❌ Exploratory Testing

❌ Flutter exposure (bonus, but still...)

Long story short, I'm not a Java guy, so you probably know how that last part went.

It didn't help that they only gave me like 7 minutes to do it at the end of the call.

The overall point is this: when an assessment doesn't measure what it's supposed to, it's misaligned, and a misaligned assessment is a poor-quality assessment. This happens far too often in hiring.

So what do we take away from this little debacle?

Make sure your manager has aligned their assessment with the skills required for the role.

What good does it do to source, recruit, and put forward ace players if they ultimately get nixed by an assessment that doesn't make sense? You'll find yourself back at square one, and for no good reason.

It doesn't matter if your manager "is technical" or "comes from an engineering background". You still need to take the time to review and advise.

These managers? They're good. They're really good at what they do.

But remind yourself that these folks spend less than 10% of their time hiring, and it's rarely their favorite activity. So be good at what you do.

Your job is to control the process,
before the process controls you.

In Part 2, we'll talk about how to make this assessment better.

Cheers,

Steven

The Better Vetter Letter

Helping tech recruiters vet client requirements and job candidates for technical roles by blending 20+ years of Engineering & Recruiting experience.
