200,000 Vibe-Coded Projects Launch Every Day. Almost None Get Customers.

Everyone is celebrating how fast you can build. Nobody is asking whether you should.

Andrej Karpathy coined the term "vibe coding" in February 2025 — a style of building where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists."

Fourteen months later, it's become the default way a generation of builders ships software. Over 200,000 new vibe-coded projects launch every single day on Lovable alone — a figure the company disclosed alongside reporting $400 million in ARR, doubled from $200 million at the end of 2025. 63% of the people building them aren't even developers.

And almost none of these projects get a single paying customer.

Greg Isenberg made the point bluntly in a viral post: building is no longer the bottleneck; distribution is.

He's right that distribution is a bottleneck. But the real problem starts way before distribution.


The Execution Trap

Here's the pattern playing out 200,000 times a day.

A founder has a vague idea. They open Cursor or Claude Code, describe what they want, and three hours later they have a working app. It looks good. It functions. They deploy it.

Then nothing happens.

Not because the product is bad. Not because they can't find distribution. But because they never did the thinking that separates a working app from a product someone will pay for.

They skipped:

  • Who exactly is this for? Not "developers" or "small businesses" — which specific person, with which specific pain, at which specific moment?
  • Why would they switch? What are they using right now, and why would they stop?
  • What's the wedge? Of all the possible things you could build, why this feature first?
  • What's the pricing logic? Not "I'll figure it out later" — what's the value equation that makes someone pull out a credit card?

These aren't execution questions. They're thinking questions. And no AI tool currently helps you answer them well.

Y Combinator's W25 batch made this painfully visible: a quarter of the companies in the batch had codebases that were 95% or more AI-generated. The tools have democratized building. They haven't democratized judgment.


The Speed Paradox

Here's where the data gets uncomfortable.

A recent audit of a vibe-coded SaaS product — one that looked polished and passed every functional test — found nine critical issues, including GDPR non-compliance, zero automated tests, undocumented business logic, no data export mechanisms, and missing audit logs. None of these surface in a demo. All of them kill enterprise deals.

It's not an isolated case. Lovable itself was hit with a critical CVE (CVE-2025-48757, CVSS score 9.3) after researchers found that 10.3% of audited apps — 170 out of 1,645 — had row-level security flaws that allowed unauthenticated attackers to access or modify entire databases. Lovable's response? Individual customers bear responsibility for protecting their own data.
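To make "row-level security flaw" concrete: the failure mode is a data-access path that never scopes rows to the requesting user. Here is a minimal, hypothetical Python sketch (names invented for illustration; in Lovable's stack the actual control is a Postgres row-level security policy rather than application code, but the logic is the same):

```python
# A toy "notes" table shared by two users.
rows = [
    {"owner": "alice", "note": "alice's draft contract"},
    {"owner": "bob", "note": "bob's customer list"},
]

def fetch_notes_vulnerable(user: str) -> list[dict]:
    # The vibe-coded version: return every row and trust the front end
    # to hide what the user shouldn't see. Anyone who calls the API
    # directly gets the whole table.
    return rows

def fetch_notes_scoped(user: str) -> list[dict]:
    # The row-level check the audited apps were missing:
    # only return rows the caller actually owns.
    return [r for r in rows if r["owner"] == user]
```

Both versions render an identical demo for a logged-in user, which is exactly why this class of flaw survives functional testing.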

The broader numbers paint a consistent picture:

  • 45% of AI-generated code samples contain at least one security flaw (Veracode, 2025 — analysis of 100+ LLMs across 80 coding tasks). For context, human-written code isn't immune either — but AI-generated code has measurably higher rates across every category.
  • 1.7x more issues overall in AI-assisted pull requests, with 2.74x more cross-site scripting vulnerabilities specifically (CodeRabbit, analysis of 470 open-source PRs)
  • 41% increase in code complexity in AI-assisted projects
  • 19% longer to complete tasks for experienced developers using AI tools on mature, large-scale codebases (METR, randomized controlled trial — 16 developers, 246 real issues across repos averaging 1M+ lines of code). The perception gap was striking: developers expected to be 24% faster, but were actually 19% slower. Important caveat: this study focused on mature, well-established repos with strict quality standards. On greenfield projects, AI tools likely provide genuine speedups.

Vibe coding doesn't eliminate work. It front-loads the easy part and backloads the hard part.


"But Isn't Fast Iteration the Whole Point?"

There's a reasonable counterargument: vibe coding IS the thinking tool. Build fast, ship to real users, learn from real data, iterate. Isn't that just lean startup methodology on steroids?

Yes — when you're validating demand. If you're testing whether anyone wants a particular solution, a quick prototype in front of real users is the fastest path to truth.

But that's not what most vibe coders are doing. They're not running experiments. They're building finished products — complete with landing pages, pricing, and payment integration — for problems they haven't validated. They're not iterating based on user feedback. They're shipping v1, seeing no traction, and moving on to the next idea.

Fast iteration requires a hypothesis to test. Without one, you're not iterating. You're just producing.


What Karpathy Accidentally Revealed

The man who coined "vibe coding" may have also revealed its biggest flaw.

Last week, Karpathy posted about spending four hours having an LLM polish his blog post. He felt great about it. Then he asked the LLM to argue the opposite — and it demolished his entire argument.

The post went massively viral.

The lesson isn't about blog posts. It's about every decision you make with AI assistance.

AI defaults to confirmation. Ask it to build something, and it builds it beautifully. Ask it to validate your idea, and it validates enthusiastically. Ask it to write a PRD, and you get the most professional-sounding PRD you've ever seen.

You can prompt it to push back — Karpathy eventually did. But the default mode, the path of least resistance, is agreement. And most people never deviate from the default.

This is the core problem with vibe coding: the tool is optimized for execution, not judgment. And when execution is essentially free, judgment becomes the only thing that matters.


The Missing Layer

Think about how a successful product actually gets built:

  1. Thinking — Understanding the problem, the market, the user, the competitive landscape
  2. Deciding — Choosing what to build, what to cut, what to sequence
  3. Building — Writing code, designing UI, creating content
  4. Distributing — Getting it in front of people who need it

AI has made Step 3 nearly free. Greg Isenberg and others are working on Step 4 — MCP servers as distribution channels, programmatic SEO at scale, answer engine optimization.

But Steps 1 and 2? Massively underserved. Yes, tools like Notion AI and Miro AI are adding intelligence to existing workflows. But they're bolting AI onto tools designed for a pre-AI world. The thinking layer — where scattered context gets synthesized into clear decisions — doesn't have its defining product yet.

That's 200,000 projects a day jumping from a vague idea straight to Step 3 — skipping the two steps that determine whether anyone will care.

Industry observers are predicting a significant shakeout within 12 to 18 months, as seed-funded vibe-coded startups hit scaling walls — code that can't handle thousands of concurrent users, security audits revealing critical findings, and codebases that new engineers can't understand. The demos looked great. The infrastructure didn't.


What Actually Works

The founders who break through the noise aren't the ones who code faster. They're the ones who think more clearly before they code.

The solo builders who actually make money from vibe-coded products share a common pattern. They don’t start with code. They start with an audience, survey their needs, validate demand, and only then build — often in 24-72 hours. The product comes last, not first.

The thinking happens first. The vibe coding happens second.

Here's what that looks like in practice:

Before you write a single line of code:

  • Talk to 5 people who have the problem you're solving. Not friends — strangers who experience the pain.
  • Find the existing solution they're using. Understand why it's not good enough.
  • Identify the single feature that would make them switch. Not 10 features — one.
  • Figure out your distribution channel before you build the product.

Then vibe code the hell out of it.

The combination of clear thinking + fast execution is lethal. Either one alone is worthless.

Smart teams are already adapting. Regulated industries are limiting vibe coding to internal tools and prototyping, while maintaining human-led development for customer-facing systems. The emerging playbook: AI generation for low-risk applications, code review for medium-risk, human-led development with AI assistance for high-risk systems.


The Opportunity Nobody Sees

Here's what excites us about this moment.

Everyone is racing to make Step 3 faster. Cursor, Lovable, Replit, Claude Code — billions of dollars making coding easier.

A few smart people are working on Step 4. Distribution tools, SEO automation, AI-powered marketing.

Steps 1 and 2 remain the biggest gap. (It’s why we’re building WhiteboardX — but that’s a story for another post.) The thinking layer. The decision layer. The place where you go from "I have an idea" to "I know exactly what to build and why."

The next wave of valuable AI tools won't help you build faster. They'll help you think harder. And the builders who figure that out first will have an unfair advantage — not because they ship more, but because they ship right.