
How to Know When You're Wrong

Most founders pivot too early or too late. The question isn't 'should I pivot?' — it's 'what does the evidence say?' How to read the signals honestly.

Paul Merrison

Founder, Launcherly

The pivot question haunts every founder. Every bad week triggers it. Every good week temporarily silences it. "Should I keep going or should I change direction?" lives in the background of your mind like a browser tab you can't close.

The standard wisdom is contradictory. On one hand: "perseverance is the most important quality in a founder." On the other: "the best founders pivot quickly when the data tells them to." Both are presented as obviously true, which doesn't help much when you're trying to decide whether this week's slow sales are signal or noise.

The two failure modes

Founders who pivot too early never give their idea a real chance. They talk to five people, get mixed signals, and conclude it's not working. They switch to a new idea, talk to five different people, get mixed signals again, and switch again. Six months later they've "tested" four ideas and validated none of them, because they never accumulated enough evidence on any one to know whether it was viable.

Founders who pivot too late pour months (or years) into something that isn't working because they've confused persistence with stubbornness. Sunk cost fallacy does most of the damage here. "I've spent eight months on this — I can't stop now" is a perfectly human response and a terrible business decision.

Both failure modes share the same root cause: making the pivot decision based on feelings rather than evidence.

Feelings are unreliable narrators

Your emotional state on any given week is a poor indicator of whether your business is working. Good weeks happen because of one encouraging conversation or one promising metric. Bad weeks happen because a prospect ghosted you or your landing page conversion rate dipped. Neither tells you much about the structural viability of your business.

Founders who rely on gut feeling for the pivot decision tend to oscillate. Monday: "this is never going to work." Wednesday: "wait, that conversation went really well, maybe I'm onto something." Friday: "nobody's signing up, I should quit." This is not a decision-making process. It's mood tracking.

What evidence-based pivoting looks like

Instead of "do I feel good about this?", the useful question is "what do I know, and what does it tell me?"

This requires being honest about the state of your evidence across your core assumptions. Something like:

Problem: Have you talked to enough people to know whether the problem is real? "Enough" isn't a fixed number — it depends on the consistency of the signal. If 8 out of 10 people bring up the problem unprompted, that's strong. If 4 out of 10 sort of mention it when prompted, that's weak. If you've talked to 3 people, you don't have enough data to decide anything.

ICP: Do you know who has the problem most acutely? Have you found a pattern in the types of people or companies that care most? If your interviews are all over the map, you might not have a broken idea — you might have an unfocused customer definition.

Solution: For the people who have the problem, does your approach make sense to them? Not "do they like the demo" but "does this address their actual workflow?" If your solution is solving a slightly different problem than the one they have, that's a targeting issue, not a pivot situation.

Distribution: Can you reach these people? If you have a validated problem, a clear ICP, and a sensible solution but you can't get anyone to look at it, the issue is distribution, not product-market fit.
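The four assumptions above lend themselves to a simple evidence log. Here's a minimal sketch in Python. The thresholds are illustrative assumptions, not numbers the framework prescribes; the only fixed points are the examples in the text (8 of 10 unprompted is strong, 4 of 10 prompted is weak, 3 interviews is too little data to decide anything).

```python
# Minimal evidence log for the four core assumptions.
# Thresholds are illustrative, not prescriptive.

def problem_signal(interviews: int, mentions: int) -> str:
    """Classify how strongly interviews support the problem assumption."""
    if interviews < 10:
        return "insufficient data"   # e.g. 3 interviews: decide nothing
    ratio = mentions / interviews
    if ratio >= 0.7:
        return "strong"              # e.g. 8 of 10 raise it unprompted
    if ratio >= 0.4:
        return "weak"                # e.g. 4 of 10 sort of mention it
    return "negative"

evidence = {
    "problem": problem_signal(interviews=10, mentions=8),
    "icp": "unclear",        # interviews all over the map: unfocused, not broken
    "solution": "untested",
    "distribution": "untested",
}
print(evidence["problem"])   # -> strong
```

The point of writing it down, even this crudely, is that "what do I know?" gets an answer you can look at instead of a feeling you re-litigate every Monday.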

When to actually pivot

A pivot is warranted when your evidence points to a structural problem — one that can't be fixed by iterating on the current approach.

Pivot signals (real evidence of a structural issue):

  • After 15+ interviews, no consistent pattern of pain in your target segment
  • The people who have the problem have already solved it in a way that's good enough
  • Your customer acquisition cost is structurally higher than your customer lifetime value, even with optimistic assumptions
  • The problem is real but the market is too small (not enough people have it)
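The CAC-versus-LTV signal in particular can be checked with arithmetic rather than feeling. A sketch, with every figure hypothetical: the test is whether acquisition cost still exceeds lifetime value even after you grant yourself best-case inputs.

```python
# Structural CAC vs LTV check with deliberately optimistic inputs.
# If CAC still exceeds LTV under best-case assumptions, that's a
# structural signal, not noise. All numbers below are hypothetical.

def ltv(monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
    """Customer lifetime value: margin-adjusted revenue over expected lifetime."""
    return monthly_revenue * gross_margin / monthly_churn

cac = 1200.0                              # cost to acquire one customer
optimistic = ltv(monthly_revenue=40.0,    # best-case price point
                 gross_margin=0.85,       # best-case margin
                 monthly_churn=0.04)      # best-case churn (25-month lifetime)

print(round(optimistic))                  # -> 850
print("structural problem" if cac > optimistic else "fixable")
```

If the optimistic case already loses money on every customer, no amount of landing-page iteration fixes it.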

Not pivot signals (noise that feels like a signal):

  • Your first 5 cold emails got no responses
  • One potential customer said "I wouldn't use this"
  • Your landing page conversion rate is low (have you tested the messaging?)
  • A competitor launched something similar (competition validates the market)
  • You had a bad week

The difference between these two lists is evidence quantity and structural implication. "15 interviews with no pattern" is a body of evidence that tells you something fundamental. "5 unanswered emails" tells you almost nothing — it might mean your email copy is bad, your subject line is wrong, or you sent them at the wrong time.
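A quick back-of-envelope shows why five emails tell you almost nothing: even at a healthy reply rate, zero responses is the most likely outcome at that sample size. Sketch below, assuming a 5% baseline reply rate (an illustrative figure, not a benchmark from this post):

```python
# Why "5 unanswered cold emails" is noise: even at a decent reply rate,
# zero responses is the expected outcome at n=5. The 5% baseline reply
# rate is an assumption for illustration.

reply_rate = 0.05
for n in (5, 15, 50):
    p_zero = (1 - reply_rate) ** n   # probability of zero replies in n sends
    print(n, round(p_zero, 2))
#  5 emails: ~0.77 -- silence is expected, tells you nothing
# 15 emails: ~0.46
# 50 emails: ~0.08 -- now silence starts to mean something
```

The same logic applies to the other noise signals: one "I wouldn't use this" is an n of 1.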

The partial pivot

Most pivots aren't (and shouldn't be) complete restarts. More often, one piece of your hypothesis turns out to be wrong while others are fine.

Your problem might be real but your ICP might be off — seed-stage companies care, Series A companies don't. That's an ICP pivot, not a full pivot. You keep the problem, change the customer.

Your ICP might be right but your solution approach might be wrong — they want a service, not a software product. That's a solution pivot. Same problem, same customer, different delivery.

Consider two concrete scenarios. Founder A builds a tool to help engineering managers track sprint velocity. After 20 interviews she realizes engineering managers care, but they don't control budget — VPs of Engineering do, and VPs have a different set of priorities. The problem is real, the product works, but the buyer is wrong. She doesn't scrap anything. She repositions, changes her messaging, starts targeting VPs, and adjusts the dashboard to surface what VPs care about (team-level trends, not individual sprint data). Same product, different customer, completely different trajectory.

Founder B is targeting the same buyer, VPs of Engineering, and the interviews confirm these people have a real pain point around developer onboarding. But everything he hears suggests they want a concierge-style service — someone to actually run the onboarding program — not a self-serve SaaS tool. He keeps the customer, keeps the problem, and rethinks delivery entirely. His first version isn't software at all. It's a spreadsheet and a weekly check-in call. That's not a failure of the original idea. It's the original idea getting smarter.

These partial pivots are much more common than the dramatic "we were a gaming company and now we're an enterprise messaging tool" stories. And they're easier to execute because you're building on evidence you've already gathered, not starting from zero.

The emotional management of being wrong

Here's the thing nobody warns you about: being wrong feels like failure, even when it's the most useful thing that can happen to you.

We have this deeply ingrained narrative that good founders have great instincts. That they see things others don't. And so when evidence tells you that your instinct was off — that the problem isn't what you thought it was, or the customer isn't who you imagined — it feels personal. It feels like you failed a test of founder-worthiness.

This is nonsense, but it's powerful nonsense. It keeps people clinging to broken hypotheses long past the point where the data is screaming at them, because admitting they were wrong means admitting they aren't the visionary founder they hoped to be.

The reframe that actually helps: being wrong early is the product. That's what you're doing in the first months of a company. You are systematically figuring out where your mental model doesn't match reality. Every wrong assumption you catch in week 6 is an assumption that can't blow up in month 12 when you've hired three people and spent your seed round. Wrong answers, found quickly, are the most valuable data points you can collect.

The founders who struggle most with the pivot question aren't lacking data. They're lacking permission — permission to be wrong without it meaning something about who they are. Give yourself that permission early. You'll make better decisions when being wrong isn't an identity crisis.

How to avoid the question entirely

The cleanest way to handle the pivot question is to structure your work so that it doesn't come as a dramatic decision. If you're continuously testing assumptions and tracking evidence, the answer to "should I pivot?" is always visible in your data.

When you can see that your problem assumption is strong, your ICP is narrowing, but your distribution assumption is failing — you don't need to pivot. You need to try a different channel. When you can see that after extensive testing, nobody has the problem you thought they had — the pivot is obvious, and it's not traumatic because you found out in week 6, not month 18.
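One way to make "the answer is visible in your data" concrete is a tiny rule that maps per-assumption status to a next step. The status labels and the mapping below are illustrative, not a formula from the framework; they encode the post's examples (a failing distribution assumption with everything else strong means "try a different channel", not "pivot").

```python
# Map per-assumption evidence status to a next step.
# Statuses ("strong", "narrowing", "failing", ...) are illustrative
# labels, not a prescribed scale.

def next_step(problem: str, icp: str, solution: str, distribution: str) -> str:
    if problem == "failing":
        return "pivot: no validated problem"
    if icp == "failing":
        return "ICP pivot: keep the problem, change the customer"
    if solution == "failing":
        return "solution pivot: same problem, same customer, new delivery"
    if distribution == "failing":
        return "not a pivot: try a different channel"
    return "keep iterating and accumulating evidence"

print(next_step("strong", "narrowing", "strong", "failing"))
# -> not a pivot: try a different channel
```

The ordering matters: a failing problem assumption dominates everything downstream, which is why it's checked first.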

The founders who agonize over the pivot question are usually the ones who haven't been tracking their evidence systematically. When you don't know what you know, every decision feels like a leap of faith.


Launcherly tracks your evidence and risk scores over time, so you can see what's working and what isn't — before it becomes a crisis. Start your free trial.