I worked as a software engineer.
AI is supposed to replace programmers, or at least help you write code.

But I never really wrote that much code in the first place.
I looked up libraries that did what I needed, then wrote a bit of glue code in between to link our API or GUI to the right functions of the chosen library.
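A minimal sketch of what that glue code looks like in practice (the function name and data shape here are hypothetical, just for illustration): instead of writing a parser from scratch, a few lines adapt Python's battle-tested standard-library `csv` module to whatever shape the rest of the app expects.

```python
import csv
import io

def load_rows(text):
    """Glue code: adapt the stdlib csv reader to the list-of-dicts
    shape our (hypothetical) GUI expects. The actual parsing --
    quoting, delimiters, edge cases -- is entirely the library's job."""
    reader = csv.DictReader(io.StringIO(text))
    return [dict(row) for row in reader]

rows = load_rows("name,qty\nwheel,4\n")
# rows == [{"name": "wheel", "qty": "4"}]
```

The point being: the library is the tested, reliable part, and the hand-written part stays small enough to verify by eye.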

And these libraries were tested, functional and most of all consistent and reliable.

Now what do you want me to do? Ask a non-deterministic LLM to implement the code from scratch every time I need it in my project?
That doesn’t make sense at all.

That’s like building a car and asking somebody else to make you a new wheel every day. Every wheel will be slightly different from the last, so your car will drive like shit.

Instead, why not just ask a reputable wheel manufacturer to make you four wheels? You know they will work. And in the case of programming, people are literally giving away good, reliable wheels for free! (free libraries and APIs)

Why use LLMs at all?

  • lime!@feddit.nu · 6 days ago

    there is one area where it excels though: bullshitting. that’s why c-levels and aspirational middle management are so impressed, because their roles are all about bullshit.

    • homes@piefed.world · 6 days ago

      I’d argue it’s just that people who operate at those levels are terrible at detecting AI bullshit. If you spend more than the bare minimum of effort (or intelligence) trying, it’s pretty obvious when you’re reading AI slop.

      So, maybe it’s useful for that, but not particularly better at it than a human.

      • lime!@feddit.nu · 6 days ago

        yeah some people seem extremely susceptible.

        i will admit that my detection skill has been improved by using local models, because i studied machine learning at uni twelve years ago and jumped at the opportunity when the hype cycle began. but it just hasn’t gotten good at anything concrete. it improves marginally at certain tasks, only to fail in more subtle ways every time. it’s getting better not at being a tool, but at disguising itself as one.

        • homes@piefed.world · 6 days ago

          Yeah, it all seemed so very promising back then, but those promises really never seemed to materialize… I’m just so disappointed.

          At least I didn’t invest billions of dollars into it.

          • lime!@feddit.nu · 6 days ago

            i mean it still could lead to something

            not by the current big actors, but sometime in the future hopefully.

            • homes@piefed.world · 6 days ago

              Oh, I’m sure that’s true, but probably something quite different than what we are being promised and much further down the road. Like how VR was hyped a lot in the early 90s, but we really didn’t get anything like that until quite recently, and it’s not quite the same.

              • lime!@feddit.nu · 6 days ago

                yeah, the tech just wasn’t there for vr. just like how llms aren’t the be-all and end-all of generative machine learning models. agents are getting close, but with the tech we currently have there is no way it could reach the promised agi status.

                i actually protested to my professor about this when we were working with neural networks in 2014. we were doing handwriting recognition and i told him “this isn’t ai”. he shot back “oh really? then write me a paper on why” and i couldn’t do it, because while i could describe what ai is not, i could not define what it actually is. that feels like the main question we should be solving for, rather than “how to get statistical text generators to seem clever”.

    • Mniot@programming.dev · 6 days ago

      Even this is disappointing. LLM bullshit is only impressively fluent compared to older generative systems. (It is very impressive compared to them. It just should have stayed in academia longer and its components could develop into useful things. Instead everyone’s falling over themselves about a kick-ass demo.)

      • lime!@feddit.nu · 6 days ago

        yeah it’s the middle-management thing again. “wow it can answer emails” “wow it can shit out demos” “wow it can follow an api spec”. as internet hippo so aptly put it, they saw that it could do the job of a manager and concluded that it was sentient rather than coming to the correct conclusion that managers aren’t.