Open Source Is Dying and Vibe Coding Is Holding the Pillow

NativeFirst Research · 13 min read

A year ago today, Andrej Karpathy posted a tweet while presumably still in his pajamas. He called it “a shower of thoughts throwaway tweet.” It went viral in about eleven minutes.

“There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

4.5 million views. Collins Dictionary’s Word of the Year. A whole generation of developers nodding enthusiastically. And an open source ecosystem that’s now gasping for air like it ran a marathon it didn’t sign up for.

Look, before the pitchforks come out: we use AI coding tools every day. Claude Code is basically our third co-founder at NativeFirst. We’ve written entire blog posts about how Opus 4.6 is the best thing since syntax highlighting. We are not the “AI bad, return to monke” crowd.

But something genuinely scary happened in the last eight weeks. And we think you should know about it.


The Damage Report

Not opinions. Not “a thought leader’s perspective.” Just stuff that actually happened since New Year’s.

cURL Killed Its Bug Bounty (Yes, That cURL)

Daniel Stenberg — the guy behind cURL, the command-line tool that runs on literally every connected device you own including your smart fridge — shut down cURL’s bug bounty program on HackerOne. Done. Gone. Effective February 1, 2026.

Why? Because AI-generated vulnerability reports hit 20% of all submissions, and the valid-report rate nosedived to 5%. That’s a 95% garbage rate. Imagine your email spam filter broke and only let through five real emails out of every hundred. Now imagine each spam email requires a security expert to manually verify it’s actually spam. That was Stenberg’s life.

His exact words: “The main goal with shutting down the bounty is to remove the incentive for people to submit crap.”

After $86,000 in total bounty payouts over the years, cURL now accepts vulnerability reports through GitHub only. No money. No reward. Just the warm fuzzy feeling of helping. Turns out the fastest way to find out who actually cares about security is to stop paying for it. Depressing? Sure. But also kind of darkly funny.

Tldraw Said “Nah, We’re Good” to All External PRs

January 15, 2026. Steve Ruiz, creator of tldraw (the infinite canvas SDK that designers love), drops this absolute banger:

“This week we’re going to begin automatically closing pull requests from external contributors. I hate this, sorry.”

The “I hate this” part is important. This wasn’t some power trip. The guy genuinely hated doing it.

Here’s what happened: AI-generated PRs started flooding in, and they weren’t the obvious garbage kind. They looked good. Tests passed. Code compiled. Even the commit messages were well-written. The tldraw team actually merged a few before they noticed the red flags — authors ignoring templates, PRs abandoned after the CLA request, suspiciously brief gaps between commits, and (our favorite) authors with dozens of PRs across dozens of completely unrelated projects. Because nothing says “I’m passionate about your infinite canvas SDK” like also contributing to a cryptocurrency exchange, a recipe app, and a Kubernetes operator in the same afternoon.

But here’s the part that genuinely broke our brains. Ruiz had been using Claude Code to turn quick notes like “fix sidebar bug” into well-formed GitHub issues. Other people’s AI tools were picking up those issues and auto-generating PRs. AI was writing the issues. AI was writing the fixes. And humans were stuck doing all the reviewing. It’s like ordering a pizza, having a robot deliver it, and then being forced to quality-check every ingredient while the robot tips itself.

Ruiz dropped the mic with: “When code was hard to write and low-effort work was easy to identify, it was worth the cost to review the good stuff. If code is easy to write and bad work is virtually indistinguishable from good, then the value of external contribution is probably less than zero.”

Less. Than. Zero. From a maintainer of one of the most respected open source projects on GitHub. Let that sink in for a second.

Ghostty: “You Will Be Permanently Banned and Publicly Ridiculed”

Mitchell Hashimoto doesn’t mess around. The co-founder of HashiCorp, creator of Terraform, and current builder of Ghostty (a GPU-accelerated terminal emulator) took a different approach: he went full school principal.

Step one (August 2025): mandatory disclosure. If you used AI in any form while contributing to Ghostty, you had to say so. Simple enough.

The result? Fifty percent of all pull requests included an AI disclosure. Half. We’ll just let that number sit there for a moment.

Step two (January 2026): zero tolerance. Ghostty’s AI_POLICY.md now reads like it was written by someone on their third coffee after reviewing their hundredth bad PR:

  • All AI usage must be disclosed, including the specific tool
  • You must fully understand all code you submit — if you can’t explain it without AI, don’t contribute
  • AI-generated PRs are only allowed for accepted issues
  • Drive-by contributions get closed immediately
  • Repeat offenders will be permanently banned

The beautiful irony? Hashimoto uses AI tools himself. His maintainers are exempt from these rules. It’s not about AI being bad — it’s about people submitting code they don’t understand, can’t defend, and won’t maintain. They’re not contributing. They’re littering.

Tailwind: More Popular Than Ever, Revenue Down 80%

This is the one that should terrify you if you work in open source or, honestly, if you work on the internet at all.

January 6, 2026. Adam Wathan, founder of Tailwind CSS — the framework that styles approximately half of everything you see on the web — laid off three of his four engineers. In a GitHub comment that went mega-viral, he wrote:

“The reality is that 75% of the people on our engineering team lost their jobs here yesterday because of the brutal impact AI has had on our business.”

And then, the line that should be studied in every business school for the next decade:

“Tailwind is growing faster than it ever has and is bigger than it ever has been, and our revenue is down close to 80%.”

Read that again. More popular than ever. Revenue down 80%. That’s not a typo and it’s not an exaggeration.

The mechanism is so simple it’s almost elegant in its cruelty. Tailwind made money because developers visited their docs, discovered commercial products like Tailwind Plus, and sometimes bought them. AI coding assistants now answer Tailwind questions directly in the IDE. Why visit the docs when Cursor already knows flex items-center justify-between? Docs traffic dropped 40% since early 2023. The funnel that paid the bills evaporated.

Someone even submitted a PR to create an llms.txt file — basically a machine-readable version of the docs optimized for AI. Wathan closed it. Can you blame him? That’s like asking a restaurant owner to install a vending machine in front of their entrance.
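For context: llms.txt is a community proposal (llmstxt.org) for a root-level markdown file that hands language models a condensed map of a site's documentation. A hypothetical sketch of what that closed PR might have produced — the file does not actually exist on tailwindcss.com, and the entries here are illustrative:

```
# Tailwind CSS

> Utility-first CSS framework documentation.

## Docs

- [Installation](https://tailwindcss.com/docs/installation): getting started
- [Flexbox & Grid](https://tailwindcss.com/docs/flex): layout utilities
```

In other words, the PR asked Tailwind to hand-deliver its docs to the very crawlers that had already cut its traffic by 40%.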

Wathan’s forecast: six months until they couldn’t make payroll.

After the story broke, tech companies tripped over each other to offer sponsorships. Vercel, Google AI Studio, Gumroad. The Hacker News thread hit 1,100 points. Very heartwarming. Also, band-aids on a broken business model.


The Spreadsheet of Doom

Let’s zoom out and look at the numbers. They’re not great.

Stack Overflow activity is down 25% since ChatGPT launched. The platform that taught an entire generation of developers how to center a div is hemorrhaging users. Nobody’s asking questions because they’re asking AI. Nobody’s answering because nobody’s asking. It’s the circle of death, but for knowledge sharing.

60% of open source maintainers are unpaid. This was true before AI. Now they’re unpaid AND drowning in AI-generated garbage.

60% of maintainers have quit or considered quitting. That number went up from 58% in just one year. 44% cite burnout. The percentage saying they’re not getting paid enough jumped from 32% to 38%.

Kubernetes retired Ingress NGINX due to maintainer burnout. Not deprecated — retired. No more security updates. One of the most widely used networking components in the entire cloud ecosystem, and nobody’s home. If you’re running it (and statistically, you probably are), congrats, you’re on your own.

Someone described this whole situation as “a denial of service attack on human attention.” We’ve been trying to come up with something better for two weeks. We can’t.


“Yes But We Love Vibe Coding” (We Know, Us Too)

Here’s where it gets awkward.

We vibe code. We freakin’ love it. We’ve published thousands of words on this very site about how Claude Code and Opus 4.6 changed our entire iOS development process. We’ve sent coding commands from the tram. We’ve refactored networking layers while pretending to listen in meetings. (Sorry, meetings.) We meant every word.

But there’s a world of difference between vibe coding in your own codebase — where you understand the architecture, where you actually read the diff before merging, where the consequences are yours — and vibe coding at someone else’s project. The first is a power tool. The second is graffiti with a fancier spray can.

And that’s exactly what’s happening. People point AI tools at open source repos, generate PRs they couldn’t explain at gunpoint, and bounce. They get a nice green square on their GitHub contribution graph. The maintainer gets to spend their unpaid Saturday evening triaging hot garbage. Win-win! (For one person.)

Even Karpathy noticed. A year after naming the beast, he’s now pushing “agentic engineering” as the proper term for what professionals do — stressing “there is an art and science and expertise to it.” Vibe coding, in his updated framing, is for throwaway weekend projects. Not for production. And definitely not for other people’s repos.

The Study That Made Us Quietly Close Our Laptop and Go for a Walk

In July 2025, METR — a well-respected AI evaluation org — published a randomized controlled trial. They tracked 16 experienced open source developers doing 246 real-world tasks on codebases they’d worked on for an average of 5 years. Big repos, million-line codebases, the real deal.

The finding: when allowed to use AI tools, developers were 19% slower.

Not faster. Slower. With AI. On their own code.

But wait, it gets better (worse?). Before the study, developers predicted AI would make them 24% faster. After the study — after being measurably, provably slower — they estimated AI had made them 20% faster. The vibes were so good they couldn’t feel reality.

METR’s researchers, who presumably exchanged several concerned glances: “When people report that AI has accelerated their work, they might be wrong.”

The study used Cursor Pro with Claude 3.5/3.7 Sonnet. Not bad tools at all. But the developers spent so much time reviewing, fixing, and second-guessing AI output that the net effect was negative. The researchers called it “extra cognitive load and context-switching.” We call it “the part where you stare at AI-generated code wondering if it’s subtly wrong or just differently correct.”

Caveats exist. These were experts in familiar codebases — the hardest scenario for AI to help with. Newer devs in unfamiliar territory might benefit more. But it blows up the narrative that AI automatically makes everyone faster at everything. Sometimes you’re just slower, with extra steps, and a pleasant feeling of productivity that doesn’t match reality. Like running on a treadmill and wondering why you haven’t arrived anywhere.


The BBC Reporter Who Got Hacked Through Vibes

This one reads like a comedy sketch but it actually happened.

February 2026. Cybersecurity researcher Etizaz Mohsin wants to demonstrate a point to BBC journalist Joe Tidy. Tidy downloads Orchids — a vibe coding platform claiming a million users — and asks it to build a simple computer game based on the BBC News website. Just vibing.

Mohsin exploits a security flaw to access Tidy’s project, injects a single line of code somewhere in the thousands of AI-generated lines, and takes control of the laptop. A notepad file titled “Joe is hacked” appears on the desktop. The wallpaper changes to an image of an AI hacker. Zero clicks from Tidy. He didn’t approve anything. He didn’t download anything. He just… vibed. And got owned.

This is what “forget that the code even exists” looks like when applied to something that actually runs on a computer connected to the internet. The code doesn’t care about your vibes. It still executes. It still has security holes. And if nobody reads it — not the “developer,” not the AI, not a reviewer — then those holes just sit there, wide open, waiting.


Okay So What Do We Actually Do

We’d love to wrap this up with a five-point plan that fixes everything. We can’t. Nobody can. But here’s what’s clearly not working and what might help.

GitHub needs to build actual tools for maintainers. Yesterday. Steve Ruiz called his auto-close policy “temporary until GitHub provides better tools for managing contributions.” GitHub makes an ungodly amount of money. Their Copilot product is trained on open source code. The least they can do is give maintainers a way to filter the AI slop that their own product is helping generate. The irony is almost too thick to spread on toast.

“Contribution = code” needs to die. Ruiz nailed this: maybe the most valuable contribution in 2026 isn’t code at all. It’s identifying and clearly describing problems. Good bug reports. Reproducible test cases. Thoughtful documentation. Let the maintainers — who actually know what’s going on — decide how to fix things, with or without AI. A well-written bug report is worth ten drive-by PRs.

Companies that profit from open source need to pay for it. This isn’t charity, it’s self-preservation. If your company uses Tailwind and your AI tools are why Tailwind can’t pay engineers, you’re freeloading with extra steps. The Open Source Pledge exists. Use it. Or don’t, and watch the tools you depend on slowly die. Your call.

Stop pretending vibe coding is free. We’re not going to stop using Claude Code. We’d be hypocrites to suggest you should. But every time an AI answers a question you’d have Googled, every time it generates code you’d have learned from the docs, every time it writes a PR you’d never have submitted yourself — someone somewhere pays a cost. A maintainer reviews garbage. A business loses traffic. A junior developer skips learning something they’ll need later. Vibing is fun. But the bill always comes.


No Summary, Just a Feeling

We started researching this post because of the cURL bug bounty. One story. Should’ve been 800 words and done by lunch.

Instead we spent two weeks falling down a rabbit hole of collapsing ecosystems. cURL. Tldraw. Ghostty. Tailwind. Stack Overflow. Kubernetes. The METR study. The BBC hack. Each story alone is a red flag. Together they look less like a red flag and more like a red carpet — leading straight to a cliff.

The people who maintain the software infrastructure that the entire tech industry runs on — for free, on their weekends, in exchange for nothing but grief — are drowning in AI-generated noise. And the industry’s response has been to build louder noise machines.

Karpathy was right about one thing: there is a new kind of coding. But the vibes aren’t reaching the people holding the pillow over open source’s face. For them there are no vibes. Just more PRs to close, more reports to triage, and one more Tuesday evening wondering if any of this is worth it.

The code still exists. The question is whether the humans behind it will.

