
What 8 Customer Interviews Actually Tell You (and What They Don't)

Customer interviews are the gold standard of early-stage validation. But most founders don't know what to do with the results, or what those results can't tell them.

Paul Merrison

Founder, Launcherly

Everyone tells you to do customer discovery. Read The Mom Test. Talk to 10 people. Validate the problem. Good advice, mostly. But there's a gap between "do interviews" and "now I know what to do" that nobody really talks about.

The gap is synthesis. Individual conversations are interesting. They're sometimes surprising. But a single interview is an anecdote, not evidence. The signal lives in the patterns across interviews, and extracting those patterns is harder than anyone makes it sound.

The interview itself is the easy part

Getting 8 people on a call is tedious but straightforward. Asking non-leading questions is a learnable skill. Not pitching your solution too early — harder, but doable with practice. The Mom Test gives you a solid framework for all of this.

The hard part comes after. You hang up the eighth call and you have... what? Eight pages of notes. Some contradictions. A few surprising quotes. One person who seemed really excited (but they seem excited about everything). Two people who had the problem but solved it differently than you expected. One who didn't have the problem at all.

What do you do with this?

Most founders pattern-match too aggressively

The natural tendency is to focus on the signal that confirms your hypothesis. You remember the quotes that support your idea and mentally discount the ones that don't. This isn't dishonesty — it's just how brains work. Confirmation bias is the default setting.

So you walk away thinking "7 out of 8 people confirmed the problem!" when a more honest reading might be "3 people had the problem acutely, 2 had it mildly, 2 had it but already solved it, and 1 didn't have it at all." These are very different conclusions with very different implications for what you do next.

What to actually look for

The useful output of customer interviews isn't a conversion rate ("7 out of 8 said yes!"). It's structured evidence about specific assumptions. After eight conversations, you should be able to answer questions like:

About the problem: Do they bring it up unprompted, or only when you ask? How do they describe it? What words do they use? How often does it happen? What does it cost them (time, money, frustration)?

About current solutions: How are they solving this today? Are they happy with their workaround or frustrated by it? How much effort do they put into the current solution? What would they change about it?

About willingness to change: Have they actively looked for alternatives? What would a solution need to do for them to switch? What would they pay? (And do you believe them?)

About your ICP: Are certain types of people experiencing this more acutely? Is there a pattern in company size, role, stage, or industry that correlates with pain severity?

These questions produce structured data. Structured data is what lets you compare across interviews and identify real patterns, rather than just remembering whichever conversation was most memorable.
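To make "structured" concrete, here's a minimal sketch of what one tagged observation could look like as data. The field names, tags, and severity scale are illustrative choices, not a prescribed schema; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One piece of evidence from one interview, tagged to an assumption."""
    interviewee: int          # which interview it came from
    assumption: str           # e.g. "problem", "current_solution", "willingness", "icp"
    note: str                 # the observation itself, kept close to the quote
    unprompted: bool = False  # did they raise it without being asked?
    severity: str = "unknown" # "acute", "mild", "none", or whatever scale you pick

# Hypothetical observations from three of the eight interviews:
observations = [
    Observation(3, "problem", "code review bottlenecks", unprompted=True, severity="acute"),
    Observation(5, "current_solution", "already using Copilot for this", severity="mild"),
    Observation(2, "icp", "eng manager at a ~40-person company", severity="acute"),
]
```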

The five questions you're probably not asking

Most interview guides lean heavily on opinion questions. "Would you use a product that does X?" "How important is this problem to you?" These feel productive in the moment, but they're collecting opinions, not evidence. People are terrible at predicting their own behavior. What you want is past behavior — what they actually did, not what they think they'd do.

Here are five questions that cut through the politeness:

"Walk me through the last time you dealt with this problem." This is the single most useful question in customer discovery. It forces specificity. Instead of "yeah, onboarding is painful," you get "last month we hired two engineers and it took three weeks before either of them shipped their first PR, and I spent about six hours a week on it personally." That's data. That tells you frequency, cost, and who bears the burden.

"What did you do about it?" Follow the last-time question with this one. If they did nothing, the problem might not be acute enough to pay for. If they built an internal workaround, you've just learned what your competition actually is (hint: it's not the other startup in the space — it's the spreadsheet they already have). If they bought something, you know what price the market bears and what features mattered enough to pay for.

"What's the most annoying part of how you handle this today?" This gets at the gap between the current solution and the ideal one. The answer tells you where your product needs to be genuinely better, not just different. "Different" doesn't get people to switch. "Dramatically better at the part that drives me crazy" does.

"Who else is involved when this problem comes up?" Buying decisions almost never live with one person. This question maps the decision-making process before you need it. If your interviewee loves the idea but their CFO controls the budget, you now know you need a CFO-friendly ROI story, not just a better demo.

"If you could wave a magic wand and fix one thing about this, what would it be?" Open-ended and slightly whimsical, which is the point. It sidesteps the instinct to be polite and lets people tell you what they actually care about most. The answers often surprise you — the thing they'd fix first isn't always the thing you assumed was most important.

Notice what these questions have in common: they're all about behavior, cost, and process. Not opinions about hypothetical products. The interview should feel less like a survey and more like journalism. You're reconstructing what actually happened, not workshopping what might happen.

The synthesis problem

Eight interviews produce a lot of information. The challenge is turning scattered observations into conclusions you can act on. This usually means going through your notes (ideally while they're fresh) and tagging observations against specific assumptions.

"Interviewee 3 mentioned code review bottlenecks unprompted" — tags to your problem assumption.

"Interviewee 5 is already using Copilot for this" — tags to competitive risk.

"Interviewees 2, 4, and 7 all mentioned the same pain point but at different severity levels" — tags to ICP refinement.

When you do this systematically, patterns emerge that you wouldn't see by just reading through your notes linearly. You might discover that everyone at seed-stage companies feels the problem acutely, but Series A companies have already hired their way out of it. That's a finding. That changes your strategy.
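If your tagged observations live in a structure like the earlier sketch, the pattern-finding itself is just a tally. A minimal sketch, reusing the hypothetical Observation records from above:

```python
from collections import Counter, defaultdict

def summarize(observations):
    """Tally evidence per assumption, split by severity, so patterns surface."""
    by_assumption = defaultdict(Counter)
    for obs in observations:
        by_assumption[obs.assumption][obs.severity] += 1
    return {a: dict(c) for a, c in by_assumption.items()}

# With all eight interviews tagged, you might see something like:
# {"problem": {"acute": 3, "mild": 2, "none": 1}, "current_solution": {...}, ...}
print(summarize(observations))
```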

Here's what a concrete synthesis might look like after eight interviews about developer onboarding pain:

  • Problem (mentioned unprompted): 6 out of 8. Strong signal. Two who didn't mention it were at companies with fewer than 5 engineers — probably below the threshold where onboarding becomes painful.
  • ICP pattern: Clear split. Engineering managers at 20-80 person companies feel this acutely. Engineering managers at 10-person companies said it was annoying but manageable. The two largest companies (200+) had dedicated teams handling it already.
  • Current solution: 5 out of 8 using some combination of Notion docs and Slack channels. Two using an internal wiki that nobody maintains. One using nothing ("we just pair program for a week").
  • Willingness to switch: 4 out of 8 said they'd actively looked for something better in the past year. Two had evaluated a competitor and decided it was too heavyweight. This tells you there's purchase intent, and it tells you "lightweight" is a positioning advantage.
  • Budget holder: In 6 out of 8 conversations, the interviewee said they'd need to get sign-off from their VP or Head of Engineering. Only two had discretionary budget to buy tools on their own.

That's not a spreadsheet exercise for the sake of process. Each of those bullet points changes a decision. The ICP split tells you who to target first. The budget-holder data tells you your sales motion needs to reach VPs, not just ICs. The competitor intelligence tells you how to position. You couldn't see any of this by reading through your notes top to bottom and going with your gut.

What 8 interviews can't tell you

It's worth being honest about the limitations. Eight interviews give you directional signal, not statistical significance. You're looking for patterns strong enough to act on, not proof.

They also can't tell you whether people will actually pay. Stated willingness to pay in an interview is roughly as reliable as a New Year's resolution. People will tell you what they think you want to hear, and they genuinely believe it in the moment. Actual purchasing behavior is a different thing entirely.

And they can't tell you about distribution. Knowing that your target customer has the problem doesn't tell you whether you can reach them affordably. That's a separate assumption that requires a separate test.

The point of interviews isn't to answer every question. It's to build enough evidence on your problem and ICP assumptions that you can move forward (or pivot) with something more than a gut feeling.

The output that matters

After eight good interviews, you should have evidence — not certainty, but evidence — for or against your core assumptions. You should know which assumptions are looking solid, which are shaky, and which you haven't tested yet.

That's the real deliverable. Not "validation" in the binary sense, but a shift in your understanding of where the risks are. Some risks go down (the problem is real, good). Some go up (there's a competitor you didn't know about, concerning). Some remain unchanged (you still don't know about distribution, that's next).

This is progress. Messy, incremental, evidence-based progress. It doesn't feel as clean as "validated!" or "invalidated!" but it's much closer to the truth.

What to do after interview 8

You've done the interviews. You've synthesized the patterns. You have directional signal on your core assumptions. Now what?

The most common mistake at this point is jumping straight to building. The interviews gave you signal on whether the problem exists and who has it. They did not give you signal on whether people will pay for your specific solution, whether your distribution channel works, or whether you can deliver the thing profitably. Those are separate assumptions that need separate tests.

The next step is designing a test — not building a product.

A good post-interview test targets your riskiest remaining assumption with the minimum amount of work. If your interviews confirmed the problem but you're unsure about willingness to pay, that test might be a landing page with a price and a "buy now" button (even if there's nothing behind it yet). If the problem is clear and people said they'd pay, but you're not sure you can reach them, the test might be running $500 worth of ads to see what your cost-per-click and conversion rate look like in your target channel.

If the biggest remaining risk is whether your solution approach actually works, the test might be a concierge version — you manually deliver the outcome your product would deliver, for five customers, and see if it sticks. This is unglamorous work. It doesn't scale. That's the point. You're not building a business yet. You're buying information.

The shape of the test depends entirely on where your risk is highest after interviews. And that's why the synthesis step matters so much — without it, you don't know which risk to tackle next, so you default to the thing that feels most productive (usually building) rather than the thing that reduces the most risk (usually something much scrappier).

Eight interviews is a beginning, not an ending. They give you enough signal to move forward with intention instead of hope. Use them to figure out what you test next, not to convince yourself you're done testing.


Launcherly helps you turn interview insights into structured evidence, linked to your assumptions and risk scores. Start your free trial.