
Many marketing interviews are terrible at predicting job performance.

You ask about their experience. They tell you about campaigns they worked on. You ask behavioral questions. They've been rehearsing those answers since college.

None of it tells you what you actually need to know. Can this person tell good from bad? Do they have judgment?

I’ve started using a different approach. I give them AI-generated work and ask them to fix it.

How It Works

Before the interview, I use ChatGPT or Claude to create something relevant to the role. An email campaign. A blog post. Product copy. Whatever matches what they’d actually be doing.

I don’t use a perfect prompt. I use a mediocre one. The kind a busy manager might actually write at 4pm on a Friday.

The AI output is usually pretty good. Sometimes it’s 80% there. Sometimes it’s impressively wrong in subtle ways. That’s the point.

In the interview, I hand it to them and say, “I had AI generate this. Take 10 minutes and tell me what you’d change and why.”

Then I shut up and watch.

What You Learn

The responses split into three categories pretty quickly.

Type 1 makes surface-level edits. They fix a typo. Maybe adjust some formatting. They can spot obvious errors but can’t evaluate strategic quality.

Type 2 tears it apart and rebuilds it. They explain what’s missing, what’s off-brand, where the logic breaks down. They can articulate why the tone is wrong for the audience or why the call-to-action won’t work. These are the people you want.

Type 3 is rarer but interesting. They say, “Actually, this is pretty good for [specific use case], but it completely misses the mark for [the actual objective].” They’ve identified a fundamental strategic problem before getting into tactical fixes.

Type 2 and Type 3 are showing you judgment. Type 1 is showing you someone who can follow instructions but can’t think critically.

Why This Works

When I worked with companies navigating social media in the early 2010s, the best interview question wasn’t “what’s your engagement rate?” It was showing them three competitor posts and asking which one they’d model and why.

Same principle here. You’re testing whether they can recognize quality, articulate problems, and think strategically about solutions.

Research from the lending industry illustrates the point. When humans and AI evaluated information together, default rates dropped to 3.1%, better than either working alone (via MarTech). But that only works if the human knows how to evaluate what the AI is producing.

What Bad Answers Sound Like

“I’d make it more creative.” (Translation: I have opinions but can’t articulate reasoning.)

“I’d run it through Grammarly first.” (Translation: I’m focused on mechanics, not strategy.)

“It’s fine, I’d probably just use it.” (Translation: I can’t evaluate quality or I’m afraid to critique anything.)

What Good Answers Sound Like

“The headline is clickbait-y but the article doesn’t deliver on the promise. I’d either change the headline to match the content or rewrite the intro to fulfill what the headline sets up.”

“This reads like it was written for a general audience, but our buyers are technical. I’d cut the explainer paragraphs and get to the specifications faster.”

“The tone is way too formal for our brand. We sound like a law firm, not a startup. I’d rewrite this entire section to sound more conversational.”

See the difference? They’re not just pointing out problems. They’re explaining the why behind the problem and proposing a fix that connects back to strategy or audience.

This also reveals something important: Are they comfortable saying “this isn’t good enough”? Because in six months, their job will involve telling AI to try again. A lot. If they can’t critique a piece of copy in an interview, they probably won’t manage AI output effectively on the job.

Try It Yourself

Next time you’re hiring, spend 5 minutes generating something with AI. Hand it to your candidate. Give them 10 minutes.

You’ll learn more than you would from an hour of behavioral questions.