
The Founder's Prioritization Problem

When everything feels urgent and you have no team, how do you decide what matters? Standard prioritization fails solo founders. Think in risks, not tasks.

Paul Merrison

Founder, Launcherly

You wake up Monday morning with a to-do list that would be ambitious for a five-person team. You need to follow up with the three people who expressed interest last week. You should probably fix that bug in the onboarding flow. Your landing page copy is mediocre and you know it. You haven't posted on LinkedIn in two weeks. Someone suggested you try cold outreach. Your pricing page doesn't exist yet.

Everything on this list is reasonable. Everything feels important. You have approximately one of you.

This is the founder's prioritization problem, and it's not solved by better to-do lists or time-blocking or getting up at 5am. It's a structural issue: when you're responsible for everything, the sheer volume of reasonable things to do makes it nearly impossible to identify the right thing to do.

Why standard prioritization frameworks fail

Most prioritization advice boils down to some version of the Eisenhower matrix or ICE scoring. Important vs. urgent. Impact vs. effort. These frameworks work fine when you have a stable context — a known product, known customers, known distribution channel — and you're optimizing execution.

Early-stage founders don't have a stable context. You're not optimizing. You're searching. You don't know if your product is right, if your customers are who you think they are, or whether your distribution channel works. In that environment, scoring tasks by "impact" is guesswork, because you don't have enough information to know what "impact" means yet.

Prioritizing your task list when your business model is unproven is like rearranging deck chairs on a ship that might be pointed at the wrong continent.

Think in risks, not tasks

A more useful lens: instead of asking "what should I work on?", ask "what could kill this business fastest?"

Every early-stage startup is a bundle of unproven assumptions. Some of those assumptions, if wrong, are fatal. Others are merely inconvenient. The difference between a good week and a wasted week often comes down to whether you spent your time on the fatal ones.

Consider two possible uses of your Thursday:

Option A: Spend the day improving your landing page design. It does look a bit amateur. Probably hurting conversion rates.

Option B: Spend the day running three customer discovery calls to find out whether the problem you're solving is actually painful enough that people would pay to fix it.

If your problem assumption is wrong, your landing page is irrelevant — no amount of polish will sell a painkiller for a headache nobody has. Option B addresses the thing that could make everything else meaningless. Option A is optimization of something that might not matter.

The risk landscape

What you actually need is a map of your assumptions ranked by severity. Something like:

  • Existential: "We don't know if anyone has this problem" — if wrong, nothing else matters
  • High: "We don't know if our target customer can be reached affordably" — could force a complete rethink
  • Medium: "Our pricing model is untested" — important but adjustable
  • Low: "Our onboarding flow is clunky" — annoying but fixable

When you see your risks laid out like this, prioritization becomes much more obvious. You work on the existential risks first. Not because the low-severity stuff doesn't matter, but because resolving the existential risks is a precondition for everything else being worth doing.
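
The ranked list above is simple enough to capture as a small data structure. A minimal sketch, assuming hypothetical claim wordings and a made-up severity scale (nothing here is a real Launcherly API):

```python
# A risk map: each entry is an unproven assumption tagged with a
# severity level. Lower rank = more dangerous if wrong.
SEVERITY = {"existential": 0, "high": 1, "medium": 2, "low": 3}

assumptions = [
    ("Our onboarding flow is clunky", "low"),
    ("We don't know if anyone has this problem", "existential"),
    ("Our pricing model is untested", "medium"),
    ("We don't know if our target customer can be reached affordably", "high"),
]

# Sort so the most dangerous unknowns surface first.
ranked = sorted(assumptions, key=lambda a: SEVERITY[a[1]])

for claim, severity in ranked:
    print(f"[{severity}] {claim}")
```

The top of the sorted list is the week's work; everything below it waits until the entries above are resolved.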

The risk audit in practice

This isn't abstract. Let's walk through what a real risk audit looks like.

Say you're building a tool that helps freelance designers manage client feedback. You've been a freelance designer yourself, you know the pain, and you've sketched out a product. Before you open your code editor, sit down and list every assumption baked into this idea.

Your list might look something like this: Freelance designers find client feedback painful enough to pay for a solution. They'd pay $30/month for it. You can reach them through design communities and Twitter/X. They're not adequately served by existing tools like Notion or Google Docs plus email. They'd trust a new, unknown tool with their client communications. The product can be built by one person in a reasonable timeframe.

Now categorize each one honestly.

"Designers find client feedback painful enough to pay" — that's existential. If this is wrong, there's no business. You might know it's painful because you lived it, but "painful" and "painful enough to pay $30/month to a startup" are very different claims. Your personal experience is a hypothesis, not evidence.

"They'd pay $30/month" — that's high severity. If the real willingness to pay is $10/month, your unit economics might not work, especially if acquisition costs are significant. But it's not necessarily fatal — you might be able to adjust the pricing model.

"You can reach them through design communities" — that's high severity too, bordering on existential. If your only viable channel turns out to be expensive paid ads, the business might not pencil out at $30/month.

"Existing tools aren't adequate" — that's medium. If Notion covers 80% of the use case, you need to be meaningfully better at the remaining 20%, but that's a product design challenge, not a show-stopper.

"They'd trust a new tool with client communications" — that's medium. Trust can be built with time, social proof, and a solid onboarding experience.

"One person can build this in a reasonable timeframe" — that's low. If it takes longer, you adjust scope or timeline. Annoying, not fatal.

Now look at the list. You have three risks in the existential-to-high band: problem severity, pricing, and reachability. The two most severe, problem severity and reachability, are your first two weeks of work. Not building. Testing.
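
The whole audit fits the same risk-map shape. A sketch with the claims paraphrased and severity labels taken from the walkthrough above:

```python
# The freelance-designer audit, encoded as (claim, severity) pairs.
SEVERITY = {"existential": 0, "high": 1, "medium": 2, "low": 3}

audit = [
    ("Client feedback is painful enough to pay for", "existential"),
    ("Designers would pay $30/month", "high"),
    ("Reachable cheaply via design communities", "high"),
    ("Existing tools (Notion, Docs + email) aren't adequate", "medium"),
    ("They'd trust a new tool with client communications", "medium"),
    ("One person can build it in a reasonable timeframe", "low"),
]

# The existential-to-high band is the near-term test agenda.
agenda = [claim for claim, sev in sorted(audit, key=lambda a: SEVERITY[a[1]])
          if SEVERITY[sev] <= SEVERITY["high"]]
```

Filtering to the existential-to-high band leaves three claims, with the existential one on top; the medium and low entries stay on the map but off this month's calendar.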

The emotional trap

The problem with risk-based prioritization is that existential risks are scary. They're the questions you kind of don't want to answer, because the answer might be "no." So you find yourself gravitating toward the medium and low severity stuff — tweaking the website, refining the pitch deck, researching competitors — because those activities feel productive without threatening the premise of your entire venture.

This is completely understandable and also completely backwards. The sooner you confront the scary questions, the sooner you either confirm you're on the right track (and can invest with confidence) or discover you need to adjust (while you still have runway to do so).

Founders who spend six months avoiding the existential questions don't avoid the risk. They just push the moment of reckoning to a point where they have fewer options.

There's a specific psychological mechanism at work here, and it's worth naming explicitly: we gravitate toward tasks where we feel competent and where the outcome is within our control. Designing a logo, picking brand colors, setting up a beautiful Notion workspace, building that one more feature — these activities produce a visible result, they exercise skills we're good at, and they never tell us "no." They feel like progress because they produce artifacts. Something you can point to at the end of the day and say "I did that."

Customer discovery calls, on the other hand, are uncertain, awkward, and frequently humbling. You might hear that nobody cares about the problem you've spent three months thinking about. You might learn that your entire thesis is wrong. There's no artifact to show for it — just notes and a slightly adjusted worldview. It doesn't feel like building a company.

So founders do what humans do: they optimize for emotional comfort. They spend Monday choosing between Stripe and Paddle for payment processing when they don't have a single paying customer. They spend Tuesday afternoon agonizing over their Twitter bio. They spend Wednesday building an admin dashboard for a product that has zero users. All of it feels productive. None of it moves the needle on the question of whether this business should exist.

The tell is when you look at your week and everything you did was reversible, low-stakes, and within your existing skill set. If nothing you did this week could have resulted in bad news, you're probably avoiding the work that matters.

What this looks like in practice

Pick the one assumption that, if wrong, would make you stop working on this. Not the one that would be annoying. The one that would be terminal.

Now ask: what's the fastest, cheapest way to test it? Usually it's not building something. It's talking to people, running a small experiment, or putting up a landing page to see if anyone cares.

Whatever that test is — that's your priority for the week. Not the fifteen other things competing for your attention. Those can wait. This one can't.

Once you've got evidence on the existential risk, move to the next highest severity. Repeat. Over time, your risk landscape shifts from "everything is uncertain" to "we know these things are solid, and here's what we still need to figure out." That's progress — real, evidence-based progress, not just motion.

One assumption per week

If "test your riskiest assumption" still feels vague, here's a concrete cadence that works: one assumption, one test, one week.

Monday, you decide which assumption you're testing. You write it down as a falsifiable statement — not "people like our idea" but "at least 3 out of 10 freelance designers I talk to will describe client feedback management as a top-3 pain point without me prompting them." The specificity matters. Vague assumptions produce vague tests produce vague results.
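
One way to force that specificity is to write the assumption as a check that can only pass or fail. A hypothetical sketch using the numbers from the example (the function name and threshold are illustrative, not a prescribed method):

```python
# Hypothetical: the assumption "at least 3 out of 10 designers describe
# client feedback as a top-3 pain point unprompted" as a pass/fail check.
def assumption_holds(unprompted_mentions: int, total_calls: int) -> bool:
    """True when at least 30% of calls raised the pain unprompted."""
    return total_calls > 0 and unprompted_mentions / total_calls >= 0.3

print(assumption_holds(4, 10))   # 4 of 10 raised it unprompted
print(assumption_holds(2, 10))   # 2 of 10 did not clear the bar
```

The point isn't the code; it's that a statement this precise can't be hand-waved into the answer you were hoping for on Thursday.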

Monday through Wednesday, you run the test. For problem validation, that usually means conversations — real ones, not surveys, not "quick question" DMs. You schedule five to ten calls with people who match your ideal customer profile. You ask open-ended questions about their workflow and their frustrations. You listen for the problem to come up organically rather than leading them to it. If you're testing distribution, you might run a small ad experiment or post in three communities and measure response. If you're testing willingness to pay, you might put up a landing page with a price and a "buy now" button that leads to a waitlist.

Thursday, you look at the evidence. Did the assumption hold? Did it partially hold? Was it completely wrong? Be honest. "Four out of ten people mentioned it, but only as a minor annoyance" is a different result from "seven out of ten people spent five minutes ranting about it unprompted." Both are useful data. Neither should be hand-waved into the answer you were hoping for.

Friday, you update your risk map. Maybe the existential risk got downgraded because you found strong evidence. Maybe it got confirmed as existential and you need to think about pivoting or narrowing your target. Maybe a new risk surfaced that you hadn't considered — that happens often. Someone in a customer call mentions a competitor you didn't know about, or a workflow constraint that makes your approach impractical for a segment you were counting on.

A good week of assumption-testing doesn't always produce good news. Sometimes it produces a "no" — the problem isn't severe enough, the channel is too expensive, the market is too small. That feels like failure, but it's actually the most valuable possible outcome at this stage. A clear "no" in week three saves you from a slow, ambiguous "no" in month eight. The founders who win aren't the ones who never hear "no." They're the ones who hear it early enough to adjust.

After a month of this cadence, you have four tested assumptions. Some confirmed, some refuted, some requiring deeper investigation. You have a dramatically clearer picture of your business than the founder who spent the same four weeks building features. And here's the subtle part: the things you learn in these early tests reshape the product itself. The customer calls reveal use cases you hadn't considered. The distribution experiments tell you which positioning resonates. The pricing test tells you who your actual buyer is versus who you imagined them to be. The testing is the product work. It just doesn't look like it from the outside.


Launcherly maps your assumptions, scores them by severity, and tells you which one to tackle next. Start your free trial.