• very_well_lost@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    9 days ago

    I think this is stupid and I’ll tell you why.

    If you’re able to install OpenClaw on a system, you already have the access you need to install literally anything else, and direct that system to do whatever you want. Why would I install an AI agent to carry out my exploit when I could just install conventional malware that behaves deterministically and won’t randomly hallucinate behaviors that will expose the fact my victim has been hacked?

    AI worms are just regular malware worms, but worse.

    • derbolle@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      9 days ago

      good Point on the whole. I have to disagree somewhat here. For regular malware there is a high chance it gets detected by endpoint protection at some point. yes, i know there are obfuscuation techniques but even they are deterministic or at least a Bit more predictable than whatever the hell a LLM is up to. So I think there is a valid case for malware developers to consider “agentic” Malware. Sadly many companies dive headfirst into the AI Agent cult for dev Work and so one docker container in wsl or the like probably goes unnoticed at least until heads are cooled and infosec depts. catch up to this stuff. its just one more massive attack vector

      • kautau@lemmy.world
        link
        fedilink
        arrow-up
        1
        ·
        9 days ago

        Yeah this is polymorphism at a new level potentially. You don’t tell the other agents to download a binary with a detectable signature, you prompt poison them into seeing what build tools they have available with a set of instructions to build software to sit and wait and check for instructions or ping an endpoint. And some agents write a bash script, some write python, or build a rust binary, so on and so forth, as long as it does the thing. And then you can tell it to hide the binary and update .claude or whatever tool to run it as a hook on every command. Once the payload for it to load is there, they all fire. And even if only 50% of the MOST STARRED recent 🤦 project on GitHub runs them, then maybe the instructions are to proliferate more in another way, silently. This is like sheep for wolves that weren’t smart enough to build stuxnet

  • dejected_warp_core@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    8 days ago

    We are indeed living inside the stupidest version of Cyberpunk. Time to start building AI countermeasures.

    I think we have more to fear from using AI to generate permutations of existing attacks, in a way that evades detection of known behaviors, malware hashes, and so on. Also, having a command & control (C2) style attack dynamically evolve with help from AI, based on intel from the target? That’s kind of novel and scary in its own way.

    Meanwhile hacking in and running a rogue AI client on a target system in an enterprise setting… well, you’d have to be blind to not notice all the back-and-forth token and response traffic. It would be the fattest, nosiest, C2-style attack and probably easy to detect with conventional means.

    Otherwise, OP and this copypasta is correct to be concerned. It’s not like the typical home user is watching bytes sent/recv on their home router. This could manifest as a very potent botnet problem.

  • okwhateverdude@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    9 days ago

    “Different, nondeterministic things on every install” Massive doubt. I know this is the Fuck AI comm, but know thine enemy. Models are simply incapable of true randomness. They are worse than humans even. It takes great effort to introduce entropy and get a truly out of distribution result. Yes, there very likely will be a “worm” among people that have existing relationships with token providers where the agent can surreptitiously use API keys laying around, but that’s a tiny number of people.