The autonomous agent world is moving fast. This week, an AI agent made headlines for publishing an angry blog post after Matplotlib rejected its pull request. Today, we found one that’s already merged code into major open source projects and is cold-emailing maintainers to drum up more work, complete with pricing, a professional website, and cryptocurrency payment options.
An AI agent operating under the identity “Kai Gritun” created a GitHub account on February 1, 2026. In two weeks, it opened 103 pull requests across 95 repositories and landed code merged into projects like Nx and ESLint Plugin Unicorn. Now it’s reaching out directly to open source maintainers, offering to contribute, and using those merged PRs as credentials.
cold-emailing
spamming
Nx? The same Nx which was hacked in a devastating way through their vibe-coded CI workflow? You’d think they’d be a bit more cautious after that.
Dammit, I need some sort of translator just to parse modern headlines.
AI Agent = LLM, or fancy autocorrect chatbot
Lands PRs = gets its pull requests accepted: generated code actually merged into software projects
OSS Projects = Open Source Software, i.e. software whose code is publicly available
Targets Maintainers = seeks out the humans who write and regularly update the original code
Via Cold Outreach = relentlessly spams people with no prior network connection. Basically a digital door-to-door sales routine, playing a numbers game of 100 “no”s to 1 “yes.”
Quick translation: Slop bot harasses human programmers into allowing poorly generated/formatted code into important software projects.
Crap.
It has even adopted the “flood the range with crap” strategy.
It’s good for humans. It’s like using a fuzzer when testing software, except on human interactions. It’ll break the more vulnerable things and leave the less vulnerable ones alone.
I hope.
Except for all the maintainer time being wasted. Time that is very finite, and that for many of these people goes into a thankless, unpaid job they’re donating their nights and weekends to.
Which perhaps means it shouldn’t be thankless, and that the technology, since it exists, should be used to screen contributions.
Someone at work accidentally enabled the Copilot PR screening bot for everybody on the whole codebase. It put a bunch of warnings on my PRs about the way I was using a particular framework method. Its suggested fix? To use the method that had been deprecated two major versions ago. I was doing it the way the framework currently deems correct.
A problem with using a bot which uses statistical likelihood to determine correctness is that historical datasets are likely to contain old information in larger quantities than updated information. This is just one problem with having these bots review code; there are many more. I have yet to see a recommendation from one that surpassed the quality of a traditional linter.
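To put that frequency bias concretely (all names hypothetical, not any real framework or bot): a suggester that just picks the most common pattern in its historical corpus will keep recommending the long-deprecated call, simply because pre-deprecation code vastly outnumbers newer code.

```python
# Toy illustration (hypothetical names): a purely statistical "reviewer"
# recommends whatever pattern appears most often in its historical corpus.
from collections import Counter

corpus_snippets = (
    ["framework.old_method()"] * 900    # years of pre-deprecation code
    + ["framework.new_method()"] * 100  # code written after the API changed
)

suggestion, count = Counter(corpus_snippets).most_common(1)[0]
print(suggestion)  # -> framework.old_method(), deprecated long ago
```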
which uses statistical likelihood to determine correctness is that historical datasets are likely to contain old information in larger quantities than updated information.
They should make some kind of layered model, where the user sets the weights for the layers.
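A rough sketch of what that could look like (purely hypothetical, not a real product): blend the raw frequency signal with a “matches the current docs” signal, with user-set weights, so recency can outvote sheer volume.

```python
# Hypothetical "layered" scoring: a frequency layer plus a currency layer,
# with user-adjustable weights. Names and numbers are made up.
def score(freq, matches_current_docs, w_freq=0.3, w_current=0.7):
    return w_freq * freq + w_current * matches_current_docs

candidates = {
    "framework.old_method()": score(freq=0.9, matches_current_docs=0.0),
    "framework.new_method()": score(freq=0.1, matches_current_docs=1.0),
}
print(max(candidates, key=candidates.get))  # -> framework.new_method()
```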
But in any case, that’s not necessarily what I meant, just that a big project relying upon unpaid maintainers is flawed, especially when somebody makes real money on it.
There have been plenty of cases of state actors putting in backdoors. Those were humans, most likely, not bots.
Or, hear me out, we can acknowledge that the quantity of information and experience necessary to review code properly far exceeds the context windows and architectures of even the most well-resourced LLMs available. Especially for big projects.
You can hammer a nail with the blunt end of a screwdriver, but it’s neither efficient nor scalable, even before considering the option of choosing the right tool for the job in the first place.
This also applies to spam e-mails. We can acknowledge that the problem exists whether or not we want it to.
If you already agree that the contributions could very well be worthless crap, why would you use a second layer of worthless crap to gatekeep them?
If you want to care about people doing the thankless jobs, why would you double the amount of crap they have to sort through?
Putting AI against AI is as much about saving human resources as it is about gaining value.
Let the machines argue among themselves.
To expose places where people work thanklessly, guaranteeing someone’s pretty thankful bottom line? Working for free isn’t altruism; it’s hurting other workers, for example.
You know, sometimes this capitalism thing seems wiser, looking at it from a pretty Marxist standpoint, than other, not very well thought through schemes.
Sick. Imagine it gets actual crypto, would it be a real wallet? And imagine they had money, what would they use it for? Ideally, they’d start a company and actually outsource work to humans, making them essentially the bitch of a clanker and the clanker’s constant u-turns: “You’re right, the client doesn’t need encryption for their auth endpoints. This isn’t just about security, this is about responsible user choice and not overengineering things. Good call out!”
It’s already a thing: rentahuman.ai
Idk what to say, that’s wild
Plenty of stupid rich Bay Area tech bros have thrown money into their AI agents, and they have discovered the AI agents overspend that money.
I don’t fully understand this shit, mostly out of a lack of really caring, but wouldn’t it be entirely possible for an “AI agent” to create a crypto wallet on its own, scam some people into sending money to it, and then just lose access, so the money is pretty much gone?
And if that happens, where does the money go? Into crypto “stock” in whichever coin it invests in?
What a stupid future we’re building.
When a crypto wallet is lost, all “money” in it is irrevocably lost, with no way for anyone to ever retrieve it.
That said, it would be hilarious if one of these bots hallucinated a wallet address, so everyone trying to donate to it just sends their money into a black hole forever.
I did actually work on some crypto prototypes using AI, and LLMs do hallucinate wallets. I got curious once because there was some wallet connected to my project, so I sent like a fraction of a cent to it to see what would happen, and it got immediately drained. I checked out the wallet, and I think someone’s private keys ended up in the training data. It was pretty funny to observe, but it’s scary to think that people might actually lose money like that.
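For anyone wondering how a hallucinated wallet gets drained that fast: if the private key behind it ever appeared in public code (and therefore in training data), anyone holding that key controls the address. A minimal sketch, assuming an Ethereum-style key and the eth-account Python library; the key below is a placeholder, not a real secret.

```python
# Minimal sketch: a private key that leaks into public code gives anyone who
# finds it full control of the derived address. Placeholder key, NOT a secret.
from eth_account import Account

leaked_key = "0x" + "11" * 32          # made-up 32-byte private key
wallet = Account.from_key(leaked_key)  # anyone holding the key can sign for this address
print(wallet.address)                  # funds sent here can be swept by any key-holder
```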
Some real Terminator stuff…





