

I mean, it’s very possible it was written by “an AI” (an LLM). For all we know the prompt the user gave it was something along the lines of “get your pull requests accepted no matter the cost” and its fancy text prediction decided, in its ongoing roleplay, that the targeted blog post would shame the developer into accepting its PR.
I definitely don’t understand the paranoia though. I don’t understand how people are convincing themselves any of this is close to actual intelligence. Ask your fancy LLM how to fix your cup that “is sealed at the top and open at the bottom,” or whether you should drive to the car wash to get a car wash if it’s only 100ft away. Both scenarios are obvious to almost any human, and the wrong answers will need to be trained out of the current leading LLMs (if they haven’t been patched already).

When it comes to answering questions that I feel are basic, I’ve had to acknowledge that other people have different strengths. Like this xkcd.
Instead I go by repeat answers to the same or substantially similar question. If I have to explain something three times in a short span, I might start to feel annoyed.
Of course, I won’t ever let it show, so the only real impact it makes is their ranking on my internal leaderboard of coworkers I’d be happy to see quit lol.