2 Failed Products, 0 Customers, and the Tool I Built to Fix the Pattern
A founder's honest account of two products that never launched — and the prioritisation gap that killed them both. Why generic advice fails early-stage founders.
Paul Merrison
Founder, Launcherly
I have a confession. I've built two products that never made it to market.
The first was a compliance automation tool — a better Vanta, built off the back of my day job as a CISO. I knew the problem intimately. I could build the solution. So I did what every sensible founder does: I read The Mom Test. Then Ready, Fire, Aim. Then Pitch Anything. Then The Diary of a CEO. I watched every Alex Hormozi video on sales. I did YC Startup School. I spent hours on Reddit looking for the thing I was missing.
I was convinced I just needed to learn one more thing.
I never launched it.
The second was productised consulting around AI governance. This time I'd done the reading. I had a website, a deck, a positioning doc. I followed the advice I was getting from AI tools about finding clients on LinkedIn. I posted, I reached out, and I got nothing. Not even polite rejections. Just silence.
What actually went wrong
Looking back, the problem was the same both times. Everything I was learning was generic. Books give you frameworks, not answers. AI tools give you advice, then forget you exist the next session. Reddit gives you a hundred opinions from people who don't know your situation.
For product #1, I needed someone to say "stop reading and go talk to 5 CISOs this week." For product #2, I needed someone to say "LinkedIn outreach doesn't work for consulting without a warm audience first. Build the audience." Both of those are obvious in hindsight. Neither appeared in the generic advice I was consuming.
The gap wasn't information. It was prioritisation. Knowing what to do next, specifically, given where I actually was.
The information-to-action gap
This isn't just a me problem. Every founder I've talked to describes some version of it. They can list ten things they could be working on. They've read the books, done the courses, asked the AI tools. They have more advice than they know what to do with.
What they can't do is confidently rank those ten things by impact. Not because they're stupid — because the ranking depends on context that no generic resource has access to. Your stage, your evidence, your constraints, what you've already tried and what happened. A fresh ChatGPT session doesn't know any of that. A blog post certainly doesn't.
The standard fix is "get a mentor." And good mentors are genuinely valuable. But even the best mentors bring their own gravitational pull — a B2B SaaS founder mentoring a consumer startup will unconsciously steer toward B2B patterns. They'll ask about your sales pipeline when you don't have one. They'll suggest enterprise pricing tiers when your users expect a free tier. The more experienced the mentor, the stronger the pull toward their own playbook.
More fundamentally, mentors can't be there for the hundreds of small decisions you make between meetings. "Should I spend today on outreach or product?" is the kind of question you face daily, and nobody is available to answer it in real time.
So I built the thing I wished existed
Launcherly started as a selfish project. I wanted the tool that could have saved my first two products — something that understood my specific situation and told me what to work on next.
The core idea: a team of AI agents (Growth Lead, Research Lead, Strategic Advisor, and a few others) that share context about your business and work with you over time.
Two things make this different from asking ChatGPT.
First, context persists. When the Growth Lead suggests a channel strategy, it knows you're selling to 500 accounting firms, not building the next Slack. It knows you tried cold outreach last month and it didn't work, so it stops suggesting cold outreach. Sounds trivial. In practice, every other AI tool I've used starts from zero every session.
Second, it sequences. Instead of "here are 47 things you could do," it tracks where you are in the founder journey and says "you haven't validated demand yet, and that's the thing that'll kill you. Here's a specific experiment to run this week."
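To make those two ideas concrete, here's a minimal sketch of what "persistent context plus sequencing" means in code. This is purely illustrative — the data model and the ranking rules are hypothetical, not Launcherly's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class FounderContext:
    """A persistent picture of one founder's business (hypothetical model)."""
    stage: str                                      # e.g. "pre-validation"
    target_market: str                              # e.g. "500 accounting firms"
    tried: list[str] = field(default_factory=list)  # channels already attempted
    evidence: dict[str, bool] = field(default_factory=dict)  # assumption -> validated?

def next_action(ctx: FounderContext) -> str:
    """Rank the next move from stored context instead of starting from zero."""
    # Sequencing: unvalidated demand outranks every channel decision.
    if not ctx.evidence.get("demand_validated", False):
        return "Run a demand-validation experiment this week."
    candidates = ["cold outreach", "content/audience building", "paid ads"]
    # Context persistence: drop channels this founder has already tried.
    remaining = [c for c in candidates if c not in ctx.tried]
    return f"Focus on: {remaining[0]}" if remaining else "Revisit positioning."

ctx = FounderContext(stage="pre-validation",
                     target_market="500 accounting firms",
                     tried=["cold outreach"])
print(next_action(ctx))  # demand not validated yet -> validation comes first
```

The point of the sketch is the shape, not the rules: because the context object survives between sessions, the same question ("what next?") gets a different answer as your evidence and history change.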
3 things I didn't expect from the beta
I've been running a beta with a small group of founders for the last four months. Some of what's emerged was surprising.
Generic playbooks actively hurt at the early stage. "Build an MVP" is correct for maybe 60% of business models and wrong for the rest. Try building a marketplace MVP before you've proven liquidity on either side. Or launching freemium when your total addressable market is 5,000 companies — you can't afford to convert 2%, you need to convert 20%. Context changes the answer completely, and the default AI response is always the most common playbook.
Founders don't lack information; they lack synthesis. Every beta user can describe ten things they could work on. Almost none of them can confidently rank those ten by impact. The value isn't giving them idea #11. It's helping them see that ideas #3 and #7 are the only ones that matter right now.
The "just ask AI" workflow has a ceiling. I use Claude and ChatGPT constantly. They're great for answering questions. They're bad at saying "you're asking the wrong question." That requires knowing your history, your stage, your constraints. A fresh session can't do that.
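The freemium example above is worth running as arithmetic, because the conclusion falls straight out of the numbers. With a fixed 5,000-company market, conversion rate is the whole business; the monthly price below is an assumption purely for illustration:

```python
def monthly_revenue(tam: int, conversion: float, price_per_month: float) -> float:
    """Paying customers times price; with TAM fixed, conversion drives everything."""
    return tam * conversion * price_per_month

TAM = 5_000     # total addressable market from the example above
PRICE = 100.0   # assumed monthly price, purely illustrative

print(monthly_revenue(TAM, 0.02, PRICE))  # typical freemium conversion: 10,000.0/month
print(monthly_revenue(TAM, 0.20, PRICE))  # the conversion you'd need: 100,000.0/month
```

At a typical freemium conversion rate you'd top out at 100 customers; the same product with the same market only works if it converts an order of magnitude better — which is a positioning and pricing decision, not a feature decision.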
The deeper lesson
The two products I failed to launch weren't bad ideas. Product #1 would have been competitive — I knew the compliance space inside out. Product #2 had genuine demand — AI governance consulting is a real market. What killed them wasn't the idea or the execution capability. It was the gap between knowing what to build and knowing how to get it to market.
That gap is where most technical founders get stuck. We're comfortable with the building. We're lost on the business side. And the resources available to help us — books, courses, AI tools, podcasts — give us more information when what we actually need is less information and more direction.
Prioritisation isn't a productivity hack. It's the core skill of the early-stage founder. And it depends entirely on context that generic advice can't provide.
Launcherly closes the gap between generic advice and specific action — tracking your assumptions, evidence, and risks so you always know what to work on next. Start your free trial.