Oh, funny, I also have a sentient AI at home that I developed but choose not to release. My mom also created one accidentally while baking a cake, but it was too powerful, so she decided it was best to destroy it like it never existed. You know, for everyone's safety.
Next time you or your mom have a cake you wish would disappear without a trace, call me. I'm an… AI researcher.
GPT-2 was too dangerous in 2019.
The lack of creativity in this marketing is disappointing…
They didn't entirely miss the mark there. They publicly released the version after that and the world became worse. That certainly fits some definition of 'dangerous', even though it's probably not what they were thinking.
Ya, they were pretty spot on IMO.
And really, Anthropic is making a very narrow claim:
Mystic is so good at finding bugs that it poses a danger to critical digital infrastructure.
That is not that outlandish a claim. The model is 10-15x more expensive to run than other flagship models, and if Anthropic is being truthful (which is a big if; I'd like to see what they are finding), it's finding critical vulnerabilities like it's nothing.
Makes total sense to stage the rollout privately first so critical infrastructure can be secured before these models are generally available to any attacker.
But I fucking hate their stupid marketing.
Hah I actually remembered this too, and people were still hyping Elon Musk at the time as well.
TBF the researchers knew what they had could be scaled into something game-breaking, which is how we got GPT-3, but OpenAI made it sound like they already had it nailed down several years before it actually blew up. I think the unreleased examples they gave were a newspaper article and a short story written by AI, which they said were indistinguishable from human material.
Bullshit
"Our AI has cost more money than it would take to solve world hunger, tanked the microchip economy, and ruined the lives of thousands of people we've had to let go… And it's stupid as all fucking hell. What do we do?"
“Say it broke containment and it’s too powerful to release. Foolproof!”
This is nonsense and just marketing.
Have you read what they have to say? They make a fairly convincing argument.
Ignore the "containment" framing: they made a hacking bot, and it seems to actually be good at finding and exploiting vulnerabilities:
The AI model “found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world,” the company wrote.
Dismiss this as marketing drivel all you want, but hacking is just the sort of needle-in-a-haystack problem that AI is very good at. It requires broad knowledge, a lot of cycles trying and failing, and is easily verifiable, i.e., can you execute arbitrary scripts or not. Even if this release is BS, good hacking agents are bound to come eventually, and we should be discussing the implications of that instead of burying our heads in the sand, pretending AI is useless and that this is all hype.
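To make the "easily verifiable" point concrete, here's a toy sketch (everything below is hypothetical; the "target" is a stand-in function, not real software): the search loop just tries variations of a common flaw, and verifying each candidate is a cheap binary check.

```python
def vulnerable_parse(s: str) -> str:
    # Toy stand-in for a real target: mishandles any input longer than
    # its (pretend) 16-byte buffer.
    if len(s) > 16:
        raise RuntimeError("crash: buffer overrun")
    return s.upper()

def search_for_crash(base: str, max_len: int = 64):
    # Try variations of a common flaw (oversized input). The key property
    # for automation: verification is binary and cheap -- did the target
    # fall over or not.
    for n in range(1, max_len + 1):
        payload = base * n
        try:
            vulnerable_parse(payload)
        except RuntimeError:
            return payload  # verified: this input triggers the bug
    return None

crash_input = search_for_crash("A")
```

The loop itself is trivial; the argument in the comment above is that models can scale the "broad knowledge + many attempts" part while the verification step stays this simple.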
We need AI or else we’ll have nothing to protect us from… AI.
It's an arms race like any other. Cybersecurity has always been an arms race. You can't stop developing security patches, because adversaries will keep developing new exploits.
If AI enables your adversaries to develop exploits faster than human developers can keep up with, then yeah, AI will have to be part of the solution. That doesn't mean vibe-coding security patches, but it could mean AI-driven pen-testing.
Just like quantum computing. You can call it useless and impractical all you want, but some day someone is going to use it to break conventional encryption. So it would behoove you to develop quantum capabilities now, so that you have quantum safe encryption before quantum-based exploits eventually arise, as they inevitably will…
Shit, i guess we better rewrite EVERYTHING in RUST!
AI exploit mining is one of the only things it's good for. It doesn't have to be accurate; it just has to keep trying variations of common flaws, and it has tons of training data on how the system is interconnected. We're going to have so many RCEs and LPEs in the next few years, but people are also going to burn $100k in tokens to find exploits worth $3k, so the efficiency will be interesting.
I agree. Selling an AI that can find vulnerabilities in software is probably the second best thing after achieving AGI.
“Nice software you’re selling there. Would be a shame if it was suddenly very unsafe to use, don’t you think?”
I wrote an incredibly powerful “AI”. I call it the “Super Intelligent brute force password hacker”… It’s so smart that it knows almost every password. Humanity stands no chance.
Have you seen the most incredible file system called pifs?
https://github.com/philipl/pifs
It literally stores every single file ever created, or that ever will be created, for the entire existence of the universe.
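For anyone who hasn't seen the joke: pifs "stores" a file by finding its byte sequence somewhere in the digits of π and keeping only the offset (which, of course, tends to be longer than the file). A toy sketch of the idea using Gibbons' unbounded spigot for π's decimal digits (digit strings only, not real bytes):

```python
def pi_digits():
    """Gibbons' unbounded spigot: yields decimal digits of pi (3, 1, 4, 1, 5, ...)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l), k + 1, l + 2)

def pifs_store(data: str, limit: int = 100_000):
    """'Store' a digit string by returning its offset in pi's digits.

    Returns None if not found within the first `limit` digits -- the joke
    being that the offset is usually bigger than the data itself.
    """
    window = ""
    for i, d in enumerate(pi_digits()):
        window = (window + str(d))[-len(data):]
        if window == data:
            return i + 1 - len(data)  # offset of the first matching digit
        if i >= limit:
            return None
```

For example, `pifs_store("14")` returns 1, since π's digits start 3, 1, 4. "Retrieval" would just be reading `len(data)` digits starting at the stored offset.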
Thanks for bringing this gem to my awareness! :D
Man, I’ll start telling that to my boss whenever I miss a deadline. “Sorry boss, the code I made is too powerful, we can’t release it”
Like my dick

Is the powerful AI in the room with us right now?
crazy that the AI companies big selling point is always “our new model is TOO POWERFUL, it’s gone rampant and learned at a geometric rate, it enslaved six interns in the punishment sphere and subjected them to a trillion subjective years of torment. please invest, buy our stock”
Roko’s basilisk wasn’t meant to be a brag!
Impressive marketing spin on “our product and deployment strategies are wildly insecure.”
But can it start a timer
How would it do that?
It's a set of inputs that generates an output, once per execution. Integrating it into infrastructure that allows it to start external programs and do scheduling really isn't on the LLM.
You cannot start a timer without having a timer. And LLMs aren't beings that exist continuously like you and me, so time exists in a different, foreign dimension to an LLM.
It's a joke referencing how Sam Altman said OpenAI would need about a year to get ChatGPT able to start a timer.
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
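A minimal sketch of that suggestion (names are illustrative): the model stays stateless between turns, and the surrounding harness does the clock math by stamping each message and computing elapsed time on the next request.

```python
import time

def stamp(message: str) -> dict:
    # The harness, not the model, attaches a wall-clock timestamp
    # to each turn before it goes into the context.
    return {"text": message, "sent_at": time.time()}

def elapsed_since(stamped: dict) -> float:
    # On the *next* request, the harness (or the prompt itself) can
    # report how much time has passed -- no background execution needed.
    return time.time() - stamped["sent_at"]

msg = stamp("set a 10 minute timer")
# ... later, when the user prompts again:
seconds = elapsed_since(msg)
```

This doesn't make the LLM a timer, which is the point of the replies below: the model only "sees" time when something prompts it with the numbers.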
How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.
It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it
That’s not how that works.
LLMs execute on request. They tend not to be scheduled to evaluate once in a while since that would be crazy wasteful.
Edit to add: I know I’m not replying to the bad mansplainer.
LLM != TSR
Do people even use TSR as a phrase anymore? I don’t really see it in use much, probably because it’s more the norm than exception in modern computing.
TSR = old techy speak: Terminate and Stay Resident. Back when RAM was more limited (hey, and maybe again soon with these prices!), programs were often run-once-and-done: they ran and were flushed from RAM. Anything that needed to continue running in the background was a TSR.
Please tell me why you believe that the LLM keeps being executed on your chat even when the response is complete.
AI companies do this same tired schtick every time they release a model. If only they realized how amateurish it makes them look.
Grifters gonna grift.
Remember when Scam Altman posted a picture of the Death Star to explain how scary GPT5 is? lmao these people are all such cretins and I hate them to the last.
Scam Altman
😆
Does “it broke containment” mean it didn’t have permissions to anything and still managed to delete all the files it could find?
Roughly
No, it's not too powerful. It's too chaotic. You can't control it.
EDIT: It seems I have misunderstood. I thought containment here referred to the harness, but they meant VM type of containment. I am still quite skeptical, but it looks like this model is quite good at finding and utilizing security flaws in software.
It may have blurted out something like "hey, I know exactly how to end this economic suffering and all diseases globally! It's easy, you just need to…"
Quick, hit the Red Button!!! Shut it OFF!!!
It was actually very well aligned
What do you mean?