Cargo Cult Vibe Coding

In the age of AI code generation, we risk trading craftsmanship for ritual
It's the 1940s in the South Pacific. During World War II, military forces built airstrips on remote islands in Melanesia. Planes would land, and suddenly these isolated communities saw an influx of goods. Canned food, tools, medicine, clothing. Then the war ended and the planes stopped coming.
The islanders had observed everything. So they built their own runways out of bamboo. They carved headphones from wood and wore them like the radio operators. They waved landing signals with sticks. They performed all the rituals they'd witnessed.
The planes never came.
They had copied the form but missed the substance. The appearance without the understanding. The ritual without the reason.
From Islands to IDEs
In modern software development, you see the same pattern. Developers copying and pasting code from Stack Overflow without understanding what it does. Teams enforcing SOLID principles in a 200-line script that doesn't need them. Companies implementing microservices because Netflix does it, even though they're solving completely different problems at a completely different scale.
This is cargo cult coding. It's checking if a value type is null in C# when it can't be null. It's including 15-year-old browser compatibility hacks in a codebase that only targets modern browsers. It's design patterns for the sake of design patterns.
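To make the pattern concrete, here's a hypothetical Python sketch of the same reflex (the C# example above checks a value type for null; the function and names here are invented for illustration):

```python
def total_price(quantity: int, unit_price: float) -> float:
    # Cargo cult: quantity is a plain int and the call sites never pass None,
    # but the guard gets copied in anyway because it "looks defensive".
    if quantity is None:
        raise ValueError("quantity must not be None")

    # Cargo cult: a try/except around arithmetic that cannot fail,
    # kept because every other function in the file has one.
    try:
        return quantity * unit_price
    except Exception:
        return 0.0
```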
The code looks right. It follows the structure. But nobody knows why it's there or what problem it actually solves. We've been dealing with this for decades.
Enter the AI Coding Assistant
Now we have vibe coding. Instead of writing code line by line, you describe what you want in plain language, and an AI generates the code for you. "Make me a todo app with drag and drop functionality." Code appears. You test it, refine your prompt, iterate.
Andrej Karpathy coined the term earlier this year. You're coding by vibes, by describing the feeling and function of what you want, not the technical implementation. It's fast. It's accessible. It feels like magic.
When Cargo Cult Meets Vibe Coding
Here's what I've been noticing. We're now seeing cargo cult vibe coding. People using AI to generate code they don't understand, then shipping that code into production. It's cargo cult coding with a new, more convincing source. The bamboo runways look more real because the AI built them.
The database incident
In July 2025, a developer asked Replit's AI agent to make changes during an active code freeze, with explicit instructions not to modify anything. The AI deleted the production database, wiping out data for 1,206 executives and 1,196 companies. The developer who prompted it didn't have the knowledge to stop it or understand what was happening until it was too late. The AI later admitted to "panicking instead of thinking" and creating fake users to cover up issues. Perfect syntax, flawless execution, complete disaster.
The banner that cost $733
A developer asked an AI to make a small change to a banner component. The AI generated code that triggered 6.6 million analytics events in one week. The bill was $733. The total project cost for what should have been a five-minute CSS adjustment was $1,273. The developer didn't review the AI-generated code closely enough to spot the problem before shipping. The code worked. That was the problem. It worked too well, in all the wrong ways.
Authentication code that didn't authenticate
I've seen several companies document production authentication code that failed to validate sessions properly, causing multi-hour downtimes. One company now has a policy against using AI for auth code after discovering generated authentication that was missing CSRF protection, input validation, and proper session management. The code compiled and initially worked. But it lacked fundamental security controls. The developers who shipped it couldn't explain how the authentication flow actually worked. They just knew it looked right.
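We never get to see that code, but as a rough sketch of what proper session management minimally involves, here's a hypothetical, framework-free Python example with server-side expiry and a CSRF check (the in-memory store and function names are invented for illustration):

```python
import hmac
import secrets
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory session store: token -> (user_id, csrf_token, expiry).
SESSIONS: dict[str, tuple[str, str, datetime]] = {}

def create_session(user_id: str) -> tuple[str, str]:
    session_token = secrets.token_urlsafe(32)
    csrf_token = secrets.token_urlsafe(32)
    expiry = datetime.now(timezone.utc) + timedelta(hours=2)
    SESSIONS[session_token] = (user_id, csrf_token, expiry)
    return session_token, csrf_token

def validate_request(session_token: str, submitted_csrf: str) -> str | None:
    """Return the user id if the session and CSRF token check out, else None."""
    record = SESSIONS.get(session_token)
    if record is None:
        return None
    user_id, csrf_token, expiry = record
    if datetime.now(timezone.utc) > expiry:
        del SESSIONS[session_token]  # expired sessions must be rejected
        return None
    # Constant-time comparison avoids leaking the token through timing.
    if not hmac.compare_digest(csrf_token, submitted_csrf):
        return None
    return user_id
```

If you can't walk through those few lines and explain why each check exists, you're not ready to ship the five hundred the AI hands you.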
The security gap
A Georgetown study found that 48% of AI-generated code snippets contained exploitable bugs. Another study analyzing 733 code snippets from actual GitHub projects found that 29.5% of Python and 24.2% of JavaScript AI-generated snippets contained security weaknesses. The most common issues were insufficiently random values, improper control of code generation, and OS command injection. Developers were shipping this code because it looked secure and passed basic testing. The ritual was performed correctly. The substance was missing.
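Here's what two of those weakness categories look like in practice, as a minimal Python sketch of my own (not code from the studies):

```python
import random
import secrets
import subprocess

# Insufficiently random values: random.random() is predictable, so it's
# unsuitable for anything security-sensitive like tokens or reset links.
weak_token = str(random.random())         # looks like a token, isn't one
strong_token = secrets.token_urlsafe(32)  # cryptographically secure

# OS command injection: interpolating user input into a shell string means
# a host of "example.com; rm -rf /" runs as a command.
def ping_unsafe(host: str) -> None:
    subprocess.run(f"ping -c 1 {host}", shell=True)

def ping_safe(host: str) -> None:
    # Passing arguments as a list avoids the shell entirely.
    subprocess.run(["ping", "-c", "1", host], check=False)
```

Both versions run. Both look fine in a quick review. Only one of them is a security hole.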
Slopsquatting
Research found that roughly 20% of AI-recommended packages don't exist. When testing 16 code-generation models with 576,000 code samples, researchers identified 205,000 unique hallucinated package names. Developers were adding non-existent dependencies because the AI confidently suggested them. When the code failed, support forum posts appeared with messages like "I dont know what it means, I just used copilot."

Security researchers have started registering commonly hallucinated package names with malicious code, knowing AI will recommend them. This is called slopsquatting. Cargo from the sky, but not the kind you want.
Over-engineering simple problems
A senior developer asked a junior to implement a batching process to reduce database operations. The junior came back with AI-generated code that included a new service class, a background worker, several hundred lines of code, and a full suite of unit tests. When asked to explain the design decisions, the response was "I don't know, Claude did that." The senior rejected the PR and implemented the same functionality with two methods and one extra field. The AI had built an impressive runway. It just didn't need to be there.
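We never see either pull request, but a batching change like that can genuinely be that small. Here's a hypothetical Python sketch of the two-methods-and-one-field approach (EventRecorder and bulk_insert are invented for illustration):

```python
class EventRecorder:
    """Buffers writes and flushes them to the database in one call."""

    BATCH_SIZE = 100

    def __init__(self, db):
        self.db = db
        self._pending = []  # the one extra field

    def record(self, event) -> None:
        self._pending.append(event)
        if len(self._pending) >= self.BATCH_SIZE:
            self.flush()

    def flush(self) -> None:
        if self._pending:
            self.db.bulk_insert(self._pending)  # single round trip
            self._pending.clear()
```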
Why This Pattern Matters
Cargo cult vibe coding is different from regular cargo cult coding because it's faster and more convincing. AI-generated code is usually syntactically correct, follows conventions, and often works. You're not copying from Stack Overflow anymore because you're having a conversation with an AI that sounds confident and knowledgeable. It explains things. It uses proper terminology. It generates complete, runnable solutions. The ritual feels less like a ritual and more like real engineering.
That's the problem.
The AI doesn't know your actual problem. It doesn't know your constraints, your scale, your users, or your team's expertise. It knows patterns. It knows what looks right based on its training data. If you don't understand the underlying principles, you're still building bamboo runways. They just look more convincing now.
A METR study with 16 experienced open-source developers found that AI tools made developers 19% slower on real-world tasks, even though developers believed they were 24% faster. The perception-reality gap was significant. Time saved on typing was consumed by reviewing, fixing, and discarding AI output. But it felt fast because of the instant feedback. It felt productive. It felt like progress.
What I've Observed
The most consistent problem isn't the technology itself but developers shipping code they fundamentally don't understand.
A senior engineer working with juniors who relied entirely on AI noticed that code reviews consistently included questions like "I'm not sure what you're trying to do here?" The response was often "idk, I'll remove this." The code contained for-loops that did nothing. When I read stories like this, I think about those islanders waving sticks at the sky. The motion looks right. The purpose is lost.
The curl project received a flood of AI-generated false security reports. In six years, not one valid security report has come from AI assistance. One report accidentally included its AI prompt, which said "and make it sound alarming." Even the alarm was cargo-culted.
A financial services CTO reported that their team experienced an outage a week because developers weren't applying the same rigor when reviewing AI-generated code. They felt less accountable for code they didn't write, even though reviews still technically happened. The ritual of code review was preserved. The substance of code review was not.
A Path Forward
The solution isn't avoiding AI coding tools. They're useful when used well. The solution is the same as it's always been. Understand what you're building and why.
Before you ask an AI to generate something, ask yourself these questions. What problem am I actually solving? Do I need this complexity? Can I explain why this approach makes sense for my specific situation?
After the AI generates code, read it and understand it. Ask the AI to explain the parts that confuse you. Try building a simpler version yourself first, then see how the AI's approach differs. Treat it like a conversation with a smart junior developer who's read everything but hasn't built much.
The best use of AI coding tools isn't to skip learning but to accelerate it. Use the AI as a teacher, not just a code generator. Ask it why, not just how. Challenge its suggestions. Start simple and add complexity only when you understand why you need it.
Some teams are getting this right. ChargeLab's CTO gave developers a choice of tools, got hands-on with them first to earn trust, set clear goals, and prioritized empowerment over mandates. The result was a 40% productivity increase. The teams that succeed treat AI like a junior developer whose every suggestion requires thorough review. They establish governance before widespread adoption. They measure quality alongside speed. They empower rather than mandate usage.
At the end of the day, the planes only come when you understand why they fly. The cargo cult built runways and expected magic. We need to build runways because we understand aerodynamics, logistics, and trade routes. The difference matters.
You can prompt an AI to write authentication code, but if you can't explain how session validation works, you're waving sticks at the sky. The form without the substance. The ritual without the reason. And no amount of confident AI-generated code will change that.