"Describe a coffee cup."
That's not a hypothetical. That was the opening question in an actual product management interview. The interviewer sat back, crossed their arms, and waited while the candidate tried to figure out what the "right" answer was.
There isn't one. There can't be. "Describe a coffee cup" doesn't test product thinking. It tests whether the candidate guesses what the interviewer wants to hear. And the interviewer doesn't know what they want to hear either — they're waiting for something that "feels" right. Something that matches their own mental model of what a good PM sounds like.
That's not interviewing. That's heuristic bingo.
The Problem With PM Interviews
Hiring in product management is broken. Not slightly broken — fundamentally, structurally broken. And most companies don't realize it because the process feels rigorous.
Here's what a typical PM interview loop looks like:
- A product case study: "How would you build a product for [vague scenario]?"
- A brain teaser: "How many gas stations are in the United States?"
- A behavioral round: "Tell me about a time you dealt with ambiguity."
- A "product sense" evaluation: "What's your favorite product and why?"
Every one of these is a bias machine. And every one of them is accepted as standard practice because "that's how PM interviews work."
Why Brain Teasers Don't Work
Brain teasers — estimating gas stations, describing coffee cups, designing a parking lot for an airport — test one thing: the candidate's ability to perform under arbitrary constraints with no real-world information.
That's not what product managers do. Product managers operate with real constraints, real data (even if incomplete), and real stakeholders. The skills you need for brain teasers — quick estimation, confident speculation, polished presentation under pressure — are the skills of a consultant in a case interview, not a product leader building with their team.
But here's the worse problem: brain teasers have no rubric. There are no scoring criteria. The interviewer listens for 10-15 minutes and then decides whether the candidate "thinks well." What does "thinks well" mean? Whatever the interviewer unconsciously pattern-matches to their own thinking style.
That's not assessment. That's confirmation bias with a hiring budget.
The Refrigerator Problem
I once watched a PM interview where the candidate was asked to "redesign a refrigerator." The candidate — a thoughtful, experienced product leader — spent twenty minutes methodically working through user research, market analysis, and feature prioritization for a hypothetical refrigerator.
Twenty minutes. On a refrigerator. In a job interview for a B2B SaaS company.
The candidate was rejected. The feedback? "Didn't demonstrate enough product intuition." What actually happened: the candidate's structured approach didn't match the interviewer's preference for quick, flashy ideas. The interviewer wanted creativity. The candidate offered rigor. Neither was wrong — but without a rubric, the interviewer's preference determined the outcome.
Every PM interview without a scoring rubric is a coin flip dressed up as evaluation.
Four Ways PM Interviews Are Biased
- Similarity bias. Interviewers consistently rate candidates higher when they share backgrounds, communication styles, or thinking patterns. "Good culture fit" usually means "thinks like me." This is why product teams at most companies look remarkably similar — the interview process filters for clones.
- Anchoring to first impressions. Research consistently shows that interviewers form an opinion in the first 30 seconds and spend the remaining time confirming it. Your "coffee cup" answer in minute one determines your outcome more than your thoughtful strategy discussion in minute forty.
- The confidence premium. Confident candidates score higher across every dimension, even when their answers are objectively worse. In product interviews — where "product sense" and "executive presence" are evaluation criteria — confidence isn't just correlated with success. It is the success criterion. This systematically disadvantages candidates who are thoughtful, deliberate, or simply less performative.
- Survivorship in interview design. The people who designed your interview process were hired through a similar process. They succeeded in it, so they believe it works. They don't see the qualified candidates it filters out — they only see the ones it lets through. The process perpetuates itself without ever being validated.
What Actually Works
If brain teasers and hypotheticals are out, what's in? Three things:
1. Use rubrics for every question
Before the interview, define what a 1, a 3, and a 5 look like for every question. Not "good answer" versus "bad answer" — specific, observable criteria. "Candidate identified three stakeholder groups" is a rubric. "Candidate demonstrated product thinking" is vibes.
When interviewers have rubrics, similarity bias drops significantly. They're scoring against criteria, not against their mental model of "what a good PM sounds like."
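As a concrete sketch, a rubric can be written down as data before the loop starts, so every interviewer scores against the same anchors. All question names and criteria below are invented for illustration, not a real company's rubric:

```python
# Hypothetical rubric: each score level is anchored to observable behavior,
# not to whether the answer "feels right." Questions and anchors are examples.
RUBRIC = {
    "stakeholder_mapping": {
        1: "Named no specific stakeholder groups",
        3: "Identified two or three groups, with generic needs",
        5: "Identified three or more groups, with conflicting needs and a prioritization rationale",
    },
    "tradeoff_reasoning": {
        1: "Asserted a solution with no alternatives considered",
        3: "Compared two options on a single dimension",
        5: "Compared options on cost, risk, and user impact, and named what would change their mind",
    },
}

def anchors_for(question: str) -> str:
    """Return the scoring anchors for one question, one level per line."""
    return "\n".join(
        f"{level}: {text}" for level, text in sorted(RUBRIC[question].items())
    )
```

The point of writing it down as data is that the anchors exist before any candidate walks in — the interviewer's job shrinks from "judge this person" to "observe which anchor this answer matched."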
2. Use work samples instead of hypotheticals
Instead of "design a product for X," try "here's a real problem our team faced last quarter. Walk me through how you'd approach it." Give the candidate real context — actual data, actual constraints, actual stakeholders.
Better yet: give them a take-home exercise with real materials. Not a 40-hour project — a focused, 2-hour exercise that tests the actual skills the role requires. What does a PM actually do at your company? Test for that, not for generic "product thinking."
3. Separate evaluation from discussion
The person conducting the interview shouldn't be the only person evaluating the candidate. Have interviewers submit their rubric scores before the debrief conversation. Otherwise, the most senior person in the room anchors everyone else's assessment. The same principle applies here as in product decisions — independent assessment before group discussion produces better outcomes.
The Diversity Connection
This isn't just about hiring better PMs. It's about hiring different PMs.
When your interview process selects for confidence, quick thinking, and similarity to existing team members, you get a homogeneous team. Homogeneous teams feel efficient — less conflict, faster consensus, shared assumptions. But they have blind spots. They miss edge cases that different perspectives would catch. They build products that work for people like them and fail for everyone else.
Structured interviews with rubrics, work samples, and independent evaluation don't just reduce bias. They expand who can succeed in your process — and that expands the range of thinking your team brings to strategic product decisions.
Start Here
Look at your team's current PM interview process. For every question, ask: "Is there a rubric that defines what a good answer looks like?"
If the answer is no — if evaluation depends on whether the candidate's response "feels right" to the interviewer — you're running heuristic bingo. You're hiring based on pattern-matching, not assessment. And the patterns you're matching are your own biases reflected back at you.
Fix the rubric first. Everything else follows.
Is your hiring process a bias machine?
Fixing PM interviews isn't about new questions — it's about better systems. I help product teams redesign their hiring processes to reduce bias and identify the candidates who'll actually succeed in the role — not just the ones who interview well.
Book a Clarity Call — 30 minutes, no pitch. Just clarity on where bias is costing your team the best candidates.
Not ready for a call? Subscribe to The Adam Thomas for frameworks and honest takes on product leadership, delivered biweekly.