
Why Most MVPs Test the Wrong Assumption

You built the thing. You launched it. Nobody came. Most MVPs fail because they test whether it works — not whether anyone needs it. Test the right thing.

Paul Merrison

Founder, Launcherly

Most founders treat their MVP as a solution test: "Will people use this thing I built?" But that's usually the wrong question to ask first.

An MVP is supposed to be a learning tool. It exists to collapse uncertainty. But most MVPs don't do that. They test whether the founder can ship something — which was never in question — while ignoring the assumptions that actually determine whether the business will work.

The result is a product that functions perfectly and matters to no one.

The assumption stack

Every startup sits on a stack of assumptions. The dangerous ones aren't about your product — they're about your market:

  1. Problem assumption — "People actually have this pain"
  2. ICP assumption — "These specific people have it worst"
  3. Willingness assumption — "They'd pay to make it go away"
  4. Distribution assumption — "I can reach them"
  5. Solution assumption — "My product solves it"

Most founders skip straight to #5. They build for months, launch, and then discover that assumption #1 was wrong all along. The solution works fine. It just solves a problem nobody was losing sleep over.

This is the central mistake: treating the MVP as a product milestone rather than an experiment. You're not trying to build the first version of your company. You're trying to figure out if a company should exist here at all.

The expensive lesson (a walkthrough)

Here's the pattern we see over and over. Let's make it specific.

Say you're a former engineering manager who was frustrated with how your team tracked on-call incidents. You had a messy spreadsheet, alerts scattered across Slack and PagerDuty, no clean handoff between rotations. So you decide to build an on-call management tool — a clean dashboard that aggregates alerts, tracks incident ownership, and generates handoff summaries automatically.

The timeline looks like this:

  • Month 1-2: You sketch the product, pick your stack, start building. You're energized. The problem feels obvious because you lived it.
  • Month 3-4: You're deep in integration work — PagerDuty API, Slack webhooks, alert deduplication logic. It's harder than expected, but you're making progress. You show it to a couple of engineer friends who say "this is slick."
  • Month 5: You polish the UI, write docs, set up a landing page. You soft-launch to your network.
  • Month 6: Twelve signups. Three actually connect their PagerDuty. One uses it for a week and stops. You conclude that your marketing wasn't strong enough. You start writing blog posts and doing cold outreach on LinkedIn.
  • Month 7-8: Another 30 signups from outreach. Same pattern. People poke around, maybe connect one integration, and disappear. You start wondering if the onboarding is too complicated.
  • Month 9: You rebuild the onboarding flow. A few more users trickle in. Still no retention. You finally start doing proper user interviews and discover the real problem: most engineering teams either use PagerDuty's built-in features (good enough) or have already solved handoffs with a simple Slack bot someone built in an afternoon. The pain you felt was real — for you, at your specific company, with your specific tooling gaps. It wasn't a market.

Nine months. Let's rewind and imagine you'd tested assumptions in order.

  • Week 1-2: You write down your assumptions. The riskiest one is obvious: do other engineering teams actually feel this pain acutely enough to want a new tool? You draft a short screener and reach out to 15 engineering managers on LinkedIn.
  • Week 3-4: Eight of them take a call. You ask about their on-call experience without pitching anything. Three mention frustrations with handoffs, but when you dig deeper, it's a mild annoyance, not a hair-on-fire problem. Five say their current setup is "fine." One mentions that PagerDuty's recent updates basically solved it.
  • Week 5: You've got your answer. The problem exists but it's not acute enough to drive purchasing behavior. You now have a choice: pivot to a different ICP (maybe larger teams with more complex rotations?), pivot to a different problem in the on-call space, or move on entirely.

Five weeks instead of nine months. You're out the cost of a few LinkedIn messages and some time. Not five months of engineering work and the emotional toll of watching something you built gather dust.

The founder who builds first isn't being stupid. They're being a builder, which is what founders do. But the MVP's job isn't to prove you can build. Its job is to produce evidence about the thing you're least sure of.

Types of MVPs for different assumptions

Here's where the conventional MVP advice falls apart. People talk about MVPs as if there's one kind: a stripped-down version of your product. But the right MVP depends entirely on which assumption you're testing.

Landing page MVP — tests problem and willingness. You write a page describing the problem and a solution. You put a signup form or a "buy now" button on it. You drive traffic to it (ads, communities, cold outreach). What you're measuring: do people recognize the problem in your copy? Do they click through? Do they give you their email or — even better — their credit card number? If you describe the pain and nobody resonates, your problem assumption is shaky. If they resonate but won't sign up, your willingness assumption needs work. You haven't built anything. You've tested two assumptions for the cost of a Carrd template and $200 in ads.

Wizard of Oz MVP — tests the solution. The user thinks they're interacting with a product, but behind the curtain, you're doing the work manually. A founder building an AI-powered legal document reviewer might set up a submission form, have a lawyer friend review the documents, and deliver results in a formatted email that looks automated. What you're measuring: does the output actually solve the user's problem? Would they come back? This tests whether your proposed solution — not your implementation of it — delivers value. You're separating "is this the right thing to build?" from "can we build it?"

Concierge MVP — tests ICP. You deliver the service personally, by hand, to a small number of people. No automation, no scale. You're essentially being the product. A founder testing an executive coaching platform might personally coach five different types of executives — first-time VPs, seasoned C-suite, mid-level managers transitioning to leadership — to figure out which segment gets the most value and has the most urgency. What you're measuring: which customer type responds most strongly? Where is the pull? The concierge model is slow and unscalable by design, because the point isn't to build a business yet. It's to figure out who the business is for.

Channel MVP — tests distribution. Before you build anything, you test whether you can actually reach your target customer at a reasonable cost. Run ads targeting your ICP. Post in the communities they frequent. Do cold outreach on the channels you'd use at scale. What you're measuring: can you get in front of these people? What does it cost? Do they engage? A brilliant product with no distribution path is a hobby project. Better to find that out before you build.

Each of these MVPs is cheap, fast, and targeted. None of them require you to write production code. And each one answers a different question. The mistake founders make is defaulting to "build a simple version of the product" regardless of what they actually need to learn.
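To make the landing page math concrete, here's a quick back-of-the-envelope calculation. The numbers are invented for illustration; plug in your own.

```python
# Hypothetical landing page test: illustrative numbers only.
ad_spend = 200   # dollars spent on ads
visitors = 500   # people who reached the page
signups = 12     # people who left an email or clicked "buy now"

conversion_rate = signups / visitors   # 0.024 -> 2.4%
cost_per_signup = ad_spend / signups   # ~$16.67

print(f"Conversion: {conversion_rate:.1%}, cost per signup: ${cost_per_signup:.2f}")
```

Two numbers, and you've learned something about the problem assumption (do they click?) and the distribution assumption (what does reaching them cost?) before writing any product code.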

The "minimum" in MVP

While we're at it, let's talk about what "minimum" actually means, because the industry has mangled this word beyond recognition.

Minimum doesn't mean low quality. It doesn't mean buggy. It doesn't mean shipping something embarrassing and hoping people squint past the rough edges.

Minimum means: the smallest possible scope that lets you test a specific assumption.

A polished landing page with clear copy and a payment button is a high-quality MVP. It's minimum in scope — there's no product behind it — but it's not minimum in craft. It does its job well: it tests whether people recognize the problem and will commit to paying.

A half-built product that sort of works but doesn't clearly test any particular assumption? That's not minimum. It's just incomplete. It's what happens when someone starts building without deciding what they're trying to learn.

The distinction matters because "minimum" has become an excuse for sloppy thinking. Founders ship something half-baked, call it an MVP, get poor results, and then conclude they need to add more features. But the problem wasn't that the product was too minimal. The problem was that it wasn't designed to test anything specific.

If you can't articulate the single assumption your MVP is testing, and what result would cause you to change direction, it's not an MVP. It's a side project with a launch date.

Test the riskiest assumption first

The fix is simple in theory, hard in practice: identify your riskiest assumption and test it before you build anything.

If you don't know whether anyone has the problem you think they have, no amount of product polish will save you. Run 8-10 customer discovery interviews before you write a line of code. And run them properly — ask about their life and their problems, not about your idea. The moment you start pitching, you stop learning.

If you know the problem is real but don't know who feels it most acutely, don't build for "everyone." Find the segment where the pain is sharpest — your beachhead. A concierge MVP with three different customer types will tell you more in two weeks than a product launch aimed at a blurry "target market."

If you know the problem and the ICP but don't know whether they'll pay, test willingness before you test usability. Put up a landing page with a price. Run a pre-sale. The data from 50 landing page visitors is worth more than the opinion of 50 friends who said "yeah, I'd probably pay for that."

Designing your MVP backwards

Here's a practical framework for getting this right. Instead of starting with "what's the simplest version of my product I can ship?", start from the other end.

Step 1: List your assumptions. All of them. Problem, ICP, willingness, distribution, solution. Be brutally honest about which ones are validated and which are gut feelings dressed up as knowledge.

Step 2: Rank by risk. For each assumption, ask: "If this is wrong, does the rest of the business collapse?" and "How confident am I that this is true, based on actual evidence (not vibes)?" High consequence and low confidence = your riskiest assumption.
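If you'd rather make the ranking explicit than keep it in your head, a simple score works. Here's a minimal sketch, assuming 1-to-5 scales for consequence and confidence; the assumption names come from the stack above, and the scores are illustrative.

```python
# Minimal risk-ranking sketch: high consequence + low confidence = high risk.
# Scales and example scores are illustrative, not prescriptive.
assumptions = [
    # (name, consequence 1-5, confidence 1-5)
    ("People actually have this pain",      5, 2),
    ("These specific people have it worst", 4, 2),
    ("They'd pay to make it go away",       5, 1),
    ("I can reach them",                    4, 3),
    ("My product solves it",                3, 4),
]

def risk_score(consequence, confidence):
    # Invert confidence on the 1-5 scale so low confidence raises risk.
    return consequence * (6 - confidence)

ranked = sorted(assumptions, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, consequence, confidence in ranked:
    print(f"{risk_score(consequence, confidence):>2}  {name}")
```

The exact formula doesn't matter. What matters is that the ranking is written down, scored against actual evidence, and revisited after every experiment.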

Step 3: Pick the right experiment. Match the assumption to the cheapest, fastest experiment that can test it. Problem assumptions need conversations and observation. ICP assumptions need exposure to multiple segments. Willingness assumptions need commitment mechanisms (signups, payments, deposits). Distribution assumptions need channel tests. Solution assumptions need some form of the product, real or simulated.

Step 4: Define your kill criteria. Before you run the experiment, decide what result would make you change course. "If fewer than 3 out of 10 interviewees mention this problem unprompted, we'll pivot the problem hypothesis." "If fewer than 2% of landing page visitors sign up, we'll revisit our ICP." Without kill criteria, every experiment becomes a Rorschach test — you'll see whatever you want to see.
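One way to keep yourself honest: encode the criterion before the experiment runs, so the threshold can't quietly drift once you've seen the data. A minimal sketch, using the illustrative 2% threshold from the example above:

```python
# Pre-committed kill criterion for a landing page test.
# The 2% threshold is the illustrative one from the example above.
MIN_CONVERSION = 0.02  # below this, revisit the ICP

def evaluate_landing_test(visitors, signups):
    rate = signups / visitors if visitors else 0.0
    verdict = "keep going" if rate >= MIN_CONVERSION else "revisit ICP"
    return rate, verdict

rate, verdict = evaluate_landing_test(visitors=500, signups=7)
print(f"Conversion {rate:.1%} -> {verdict}")  # Conversion 1.4% -> revisit ICP
```

Writing it as code is optional; writing it down before the test is not.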

Step 5: Run it, learn, repeat. Run the experiment, collect the data, update your assumptions. Then go back to step 2. Your next riskiest assumption might have changed based on what you learned. Maybe the problem is validated now but you're newly uncertain about distribution. Great — test distribution next.

This is what "working backwards" looks like in practice. You're not starting from the product and asking "is this good enough?" You're starting from your ignorance and asking "what's the fastest way to know more?"

How to identify your riskiest assumption

Ask yourself: "If this turns out to be wrong, does anything else matter?"

That's your existential risk. Everything else is downstream.

At Launcherly, we call this your risk landscape — a scored map of every assumption your business depends on, ranked by severity. The highest-severity risk is always the one to address next, because if it fails, nothing else you build will matter.

Most founders, when they're honest with themselves, know which assumption is the scariest. It's the one they've been avoiding. The one they keep building around instead of testing, because building feels safer than asking a question you might not like the answer to.

Your MVP is a question, not an answer

Your MVP should test your riskiest assumption, not your favorite feature. Figure out what could kill your startup, and test that first. Everything else is optimization.

The founders who move fastest aren't the ones who ship the most code. They're the ones who learn the fastest — who design experiments around their biggest unknowns and are willing to throw away their assumptions when the data disagrees.

Build less. Learn more. And make sure the thing you're building is actually teaching you something you need to know.


Launcherly helps founders identify and prioritize their riskiest assumptions, then provides AI-powered guidance to test them systematically. Start your free trial.