Vibe Coding Is Real. So Is the Mess It Leaves Behind. - By Sourav Mishra (@souravvmishra)

Natural-language-driven dev, 95%+ AI-generated codebases at YC, and why someone still has to own architecture and security.


"Vibe coding" stopped being a joke. You describe what you want; the model writes the code. Karpathy nailed it: give in to the vibes, forget the code exists. YC's Winter 2026 batch had a noticeable share of startups with codebases that were 95%+ AI-generated. The primary "writer" is the model; humans prompt, review, and fix. That's the new normal for a lot of greenfield work. In this post I, Sourav Mishra, break down what vibe coding actually is, where it works, what goes wrong, and why someone still has to own architecture and security.

What Vibe Coding Actually Is

Vibe coding is developing by describing intent in natural language and letting AI generate or edit code—minimal manual typing. "AI does the typing; you do the directing." You might use Claude Code, Cursor, GitHub Copilot, or another agent; the common thread is that the model produces most of the code and you steer with prompts, edits, and review. It's not "AI writes everything and you ship it blind." It's "the model drafts; we own the final cut."

The upside is real: faster iteration, less boilerplate pain, more time on product. Teams that vibe code well report shipping features in a fraction of the time they used to. The risk: if nobody reads what the AI wrote, ownership and understanding evaporate. I've seen codebases where the team couldn't explain whole modules. So even in full vibe mode, someone has to own architecture, security, and review.

Where It Works—and Where It Doesn't

Vibe coding works best when the problem is well-scoped and the model has good context (docs, types, existing patterns). Greenfield features, CRUD, integrations with well-documented APIs, and refactors that follow clear instructions tend to go smoothly. It's less reliable when the task is underspecified, the codebase is huge and messy, or the model has to invent architecture from scratch. Then you get inconsistent patterns, hidden assumptions, and code that "works" until an edge case appears.

YC and survey data show teams shipping with mostly AI-generated code in early-stage products. That's a real segment. The catch: those teams still have at least one person who understands the stack and can debug, secure, and refactor. The model doesn't own production; the team does.

The Tools People Actually Use

The tools I see most in practice: Claude Code, Cursor, and (for inline completion) Copilot-style assistants. Claude Code is terminal-first, strong on multi-step reasoning and cost control; Cursor is editor-first, great for "show me how to do X" and staying in one environment. I compared Claude Code vs Cursor in detail—pick by workflow, cost, and how much you want to cap variable spend. For building your own agent (tool-calling, loops, guardrails), see my agentic chatbot guide. Either way: the model generates; you review and ship.

What Goes Wrong: Ownership and Security

If nobody reads what the AI wrote, you get three problems. First, ownership: when something breaks, no one knows why a choice was made or where the logic lives. Second, security: AI introduces bugs and insecure patterns (e.g. hardcoded secrets, SQL injection, overprivileged access). Third, debt: inconsistent patterns and "magic" code that only the model could love. So even in full vibe mode, you need human review, tests, and clear ownership of architecture and security. You ship; the model doesn't.
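The SQL-injection failure mode is the easiest to show concretely. Below is a minimal sketch (not from the original post) of the kind of AI-generated pattern a human reviewer should catch, using Python's sqlite3 as a stand-in database; the table and function names are illustrative only:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name: str):
    # The pattern reviewers should reject: string interpolation lets
    # input like "' OR '1'='1" become part of the SQL statement itself.
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # injection returns every row
print(find_user_safe("' OR '1'='1"))    # returns [] (no user has that literal name)
```

Both functions look plausible in a diff; only the review step separates them. That is the point: the model drafts, a human owns the final cut.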

Concrete practices: run linters and security scanners on AI-generated code; require at least one human review before merge; keep a single owner or small team that can explain the critical paths. Don't treat vibe coding as "no one needs to understand the code." Treat it as "the model drafts; we own the final cut."
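To make "run scanners on AI-generated code" concrete, here is a toy pre-merge check that flags likely hardcoded secrets in added diff lines. This is a sketch I'm adding for illustration, not a tool from the post; the patterns are deliberately crude, and real pipelines should use dedicated scanners such as gitleaks or trufflehog:

```python
import re

# Toy heuristics: a credential-looking assignment, or an AWS access key ID shape.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def scan_diff(diff: str) -> list:
    """Return added lines (prefixed '+') that look like hardcoded secrets."""
    findings = []
    for line in diff.splitlines():
        if line.startswith("+") and any(p.search(line) for p in SECRET_PATTERNS):
            findings.append(line)
    return findings

diff = '+API_KEY = "sk-live-abc123def456"\n+color = "blue"'
print(scan_diff(diff))  # flags the API_KEY line, not the color line
```

Wire a check like this (or the real scanner it stands in for) into CI so it runs before the human review gate, not instead of it.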

Key Takeaways

  • Vibe coding = natural-language-driven dev; AI generates, you direct and review. It's real—YC and surveys show teams shipping with 95%+ AI-generated code in some segments.
  • Upside: faster iteration, less boilerplate. Risk: loss of ownership and security if nobody reads what the AI wrote.
  • Where it works: well-scoped tasks, good context, greenfield or clear refactors. Where it doesn't: underspecified tasks, huge messy codebases, architecture-from-scratch.
  • Ownership: someone must own architecture, security, and review. Human review, tests, and clear responsibility—you ship; the model doesn't. Tools: Claude Code vs Cursor; agentic chatbot for building your own.

Written by Sourav Mishra, Full Stack Engineer working with Next.js and AI.

Frequently Asked Questions

Q: What is vibe coding? Developing by describing intent in natural language and letting AI generate or edit code—minimal manual typing. "AI does the typing; you do the directing."

Q: Is it just a meme? No. YC and survey data show teams shipping with mostly AI-generated code. It's a real segment, especially in early-stage and greenfield work.

Q: What about code quality? AI introduces bugs and security issues. Keep human review, tests, and clear ownership of architecture and security. You ship; the model doesn't.

Q: Which tools should I use? Claude Code for heavier runs and cost control; Cursor for day-to-day editor flow. I compared them in Claude Code vs Cursor. For custom agents, see building an agentic chatbot.
