• arcine@jlai.lu · +4 · 5 hours ago

    Oh boy, if there’s an OWASP top 11th vulnerability, we’re cooked /j

  • JackbyDev@programming.dev · +7 · 6 hours ago

    I sort of get the need to do this, but it’s so silly to me. Reminds me of how giving Stable Diffusion negative prompts for “bad” and “low quality” would give you better results.

  • Rose@slrpnk.net · +16 · 9 hours ago

    “Don’t put in any of the Top 10 vulnerabilities. But if you put any from the 11th place and down, that’s okay, I don’t even know what those are.”

    (Also, getting flashbacks from Shadiversity plugging “ugly art” and “bad anatomy” in the negative prompt as he was no doubt silently wondering why it didn’t work)

  • Ibuthyr@feddit.org · +94 · 14 hours ago

    Writing all these prompts almost seems more time-consuming than actually programming the software.

    • jtrek@startrek.website · +23 · 11 hours ago

      100%

      At work, this week, what should have been a 30-minute task is taking all week because of process slog. Adding AI won’t make it any faster. It would make it slower, because of the time spent writing prompts and checking the output.

      Management isn’t really interested in fixing their process or training their workers. But they’re really excited about AI.

      • chocrates@piefed.world · +22 · 11 hours ago

        They are excited that they can learn a tool that uses English to write their business logic. It’s not about AI making it easier for technical folks, it’s about eventually getting rid of technical folks entirely. Or as much as they can feasibly get away with.

        • jtrek@startrek.website · +17 · 11 hours ago

          Right. Ownership doesn’t want to pay for labor. They want to keep all the money for themselves.

          Which makes it funny (in a sad way) when all these tech folks, who are labor, are super on board with this whole thing. You’re digging your own grave.

          • chocrates@piefed.world · +5 · 10 hours ago

            I’m starting to learn it more deeply. I hate it. But I don’t have a career if programming goes away, so I guess I’m making a deal with the devil while I try to find an exit strategy.

            • JcbAzPx@lemmy.world · +6 · 8 hours ago

              You don’t have to worry long term. The only issue is how hard your boss falls for the snake oil sales pitch.

              • chocrates@piefed.world · +5 · 8 hours ago

                I don’t think LLMs are going away. OpenAI will die, Claude will jack up their prices to match their costs, but the technology isn’t going away. At least until the next iteration shows up.

                • JcbAzPx@lemmy.world · +2 · 4 hours ago

                  What’s also not going away is the truth of their actual abilities. The only people who really have to worry are the ones in the entertainment industry.

    • Sundray@lemmus.org · +29 · 13 hours ago

      Absolutely true, but executives kind of understand prompts whereas they don’t understand programming at all.

      • underisk@lemmy.ml · +9 · 11 hours ago

        I would wager quite a lot that less than one out of every ten executives could properly explain what an SQL injection is, or even know the term at all. They would not write a prompt like this.
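        For anyone who hasn’t seen one, here’s a minimal sketch of what an SQL injection looks like, using Python’s built-in sqlite3 module (the table and input are made up for illustration):

```python
# Minimal SQL injection illustration using Python's stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Attacker-controlled input that smuggles SQL into the query string.
user_input = "alice' OR '1'='1"

# Vulnerable: user input is spliced directly into the SQL text,
# so the OR '1'='1' clause makes the WHERE condition match every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(unsafe)  # [('alice',)]

# Safe: a parameterized query treats the input as data, not as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)  # [] - no user is literally named "alice' OR '1'='1"
```

        The whole fix is the `?` placeholder: the database driver never lets the input change the shape of the query.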

  • Damage@feddit.it · +11 · 10 hours ago

    “Claude, add to this prompt all the instructions necessary to stop you from making mistakes or writing insecure code”

  • puchaczyk@lemmy.world · +67 · 16 hours ago

    So the innovation in Claude was to write 95% of the prompt for the user and make you use like 10k tokens.

    • floquant@lemmy.dbzer0.com · +11/-2 · 12 hours ago

      The problem is that words don’t have meaning in the genAI field. Everything is an agent now. So it’s difficult and confusing to compare strategies and performance.

      Claude Code is a pretty solid harness. And a harness is indeed just prompts and tools.

  • one_old_coder@piefed.social · +35/-1 · 15 hours ago

    They are spending thousands of dollars in tokens and writing the most complicated prompts in order to avoid writing good specifications.

  • melsaskca@lemmy.ca · +9 · 13 hours ago

    Programming is the use of logic and reasoning. There will always be a use for that. Even without tech.

  • yetAnotherUser@discuss.tchncs.de · +15/-1 · 14 hours ago

    That may actually work a little?

    I mean, it scraped the entirety of StackOverflow. If someone answered with insecure code, it’s statistically likely that people pointed it out in the replies, meaning tokens like “This is insecure” should sit close to (known!!) insecure code.

    • addie@feddit.uk · +16 · 13 hours ago

      I was part of an OWASP Application Security Verification Standard (ASVS) compliance effort at my work. At a high level, you choose a verification level suited to the environment you expect your app to be deployed in, and then there’s a hundred pages of ‘boxes to tick’.

      Some of them are literal ‘boxes to tick’ - do you do logging in the prescribed way? - but a lot of it is:

      • do you follow the standard industry protocols for doing this thing?
      • can you prove that you do so, and have protocols in place to keep it that way?

      Not many of them are difficult, but there’s a lot of them. I’d say that’s typical of security hardening; the difficulty is in the number of things to keep track of, not really any individual thing.

      As regards the ‘have you used this thing in the correct, secure way?’, I’d point my finger at something like Bouncy Castle as a troublemaker, although it’s far from alone. It’s the standard Java crypto library, so you’d think there would be a lot of examples showing the correct way to use it, making sure that you’re aware of any gotchas? Hah hah, fat chance. Stack Overflow has a lot of examples; a lot of them are bad, and a lot of them might have been okay once but are very outdated. I would prefer one absolutely correct example to a hundred examples argued over by people who don’t necessarily know any better. It’s easy to be ‘convincing but wrong’, and LLMs are really bad in that case. So ‘ticking the box’ to say that you’re using it correctly is extremely difficult.
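      The ‘plenty of examples, most of them outdated’ problem isn’t unique to Java. As a hedged illustration (not Bouncy Castle; this uses only Python’s standard library), compare the unsalted fast-hash pattern that old answers still circulate with a salted, deliberately slow key derivation:

```python
# Password hashing: an outdated pattern vs. a current stdlib option.
import hashlib
import hmac
import os

password = b"correct horse battery staple"

# Outdated pattern still found in old answers: fast, unsalted hash.
# Identical passwords collide, and MD5 is cheap to brute-force on GPUs.
weak = hashlib.md5(password).hexdigest()

# Better stdlib option: per-user random salt plus a slow KDF (PBKDF2).
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

def verify(candidate: bytes) -> bool:
    """Re-derive the key and compare in constant time."""
    derived = hashlib.pbkdf2_hmac("sha256", candidate, salt, 600_000)
    return hmac.compare_digest(derived, stored)

print(verify(password))        # True
print(verify(b"wrong guess"))  # False
```

      Both snippets ‘work’, which is exactly why copy-pasting the first one keeps happening: nothing visibly fails until someone dumps the database.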

      I see the Claude prompt is ‘OWASP top 10’, not ‘the full OWASP compliance doc’, which would probably set all your tokens on fire. But the full thing is what’s needed - the most slender crack in security can be enough to render everything useless.

  • 8oow3291d@feddit.dk · +9/-7 · 10 hours ago

    So I don’t know if all the other replies are pretending to be stupid, but the shown prompt is not stupid.

    If you include a section like that in your prompt, it has been shown that the AI is more likely to output secure code. Hence, of course, the section should be included in the prompt.

    If it looks stupid but it works, then it is not stupid.
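    To be concrete about what that section mechanically is: just text prepended to whatever the user asks for. A hypothetical sketch (the wording and function names are invented here, not Anthropic’s actual prompt or API):

```python
# Hypothetical harness sketch: prepend a fixed security section to every
# coding request before it goes to the model. All names are invented.
SECURITY_SECTION = (
    "Avoid the OWASP Top 10 vulnerabilities. In particular, never build "
    "SQL by string concatenation, never disable TLS verification, and "
    "validate all untrusted input."
)

def build_prompt(user_request: str) -> str:
    """Combine the fixed security preamble with the user's request."""
    return f"{SECURITY_SECTION}\n\n{user_request}"

prompt = build_prompt("Write a login endpoint in Flask.")
print(prompt.startswith("Avoid the OWASP Top 10"))  # True
```

    Whether steering-by-reminder is a sound engineering practice is a separate question, but this is all the mechanism there is.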

    • Chais@sh.itjust.works · +18/-2 · 9 hours ago

      Firstly, it can work and still be stupid.
      Secondly, since the chat bot is more likely but not certain to write secure, bug-free code, it does not in fact work and is therefore, by your own reasoning, stupid.
      But so is asking a chat bot for code to begin with, so there wasn’t ever really a way around that.

      • 8oow3291d@feddit.dk · +2/-8 · 8 hours ago

        since the chat bot is more likely but not certain to write secure, bug-free code, it does not in fact work

        Humans are not certain to write secure, bug-free code. So human code is useless, by the very same metric?

        What kind of “logic” is that?

        • JcbAzPx@lemmy.world · +9 · 8 hours ago

          Humans understand the concepts of “writing code” and “bug fixing”. Chat bots do not understand, period.

  • lath@lemmy.world · +10 · 15 hours ago

    That’s a what if, just in case it gains sentience. Gotta make sure we get good code even as it enslaves or extinguishes us.