The Lutris maintainer has been using AI-generated code for some time now. The maintainer also removed Claude's co-authorship from the commits, so no one knows which code was generated by AI.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
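
For context on what "removing co-authorship" means technically: Git records co-authors as a plain "Co-Authored-By:" trailer line at the end of the commit message, so stripping the credit is simple text filtering. A minimal sketch on a made-up commit message (rewriting already-published history would additionally involve git rebase or git-filter-repo, not shown here):

```shell
# Sample commit message with the kind of co-authorship trailer that
# Claude-based tools append (the message text is made up for illustration).
msg='Fix installer crash

Co-Authored-By: Claude <noreply@anthropic.com>'

# Deleting the trailer is a one-line text filter.
cleaned=$(printf '%s\n' "$msg" | sed '/^Co-Authored-By:/d')
printf '%s\n' "$cleaned"   # prints: Fix installer crash
```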

sauce 1

sauce 2

  • Caveman@lemmy.world · +11 · 20 hours ago

    As much as I don’t like AI, I don’t blame him for using it. Open source is a thankless sector where maintainers put in massive amounts of work for nothing in return. If AI is helping him, then all the power to him.

    That being said, we don’t know how currently generated AI code will age. We know that code from three years ago ended up as hard-to-maintain slop, but modern models are a lot more competent. Maybe he shouldn’t have removed the co-authored-by-Claude tag, but in the end it’s him using a tool and verifying its output.

    I used lutris back in the day for playing rocket league and I’d also use it today. I feel we should give this guy the benefit of the doubt for now. If in the future Lutris becomes less stable we should absolutely blame AI but until then I’ll hold off on my judgement.

  • morphite88@thelemmy.club · +4 · 17 hours ago

    Could someone branch the project off from a point in time when it was safely human-coded and develop from there?

  • missingno@fedia.io · +45/-1 · 1 day ago

    If you truly believe AI is so great, own it. Trying to hide it is not a good look. It shows that they know it’s something to be ashamed of.

    • ashughes@feddit.uk · +19/-1 · 1 day ago

      Beyond that, I actually consider this to be a violation of open source, if not in letter, at least in spirit. Setting aside the debate about inclusion of LLM-generated code in open source software, I see the obfuscation of the source of that code to be robbing me of my fundamental freedom to truly study the code. It also robs me of my choice to decide where and when I interact with AI in my life.

      Going further, I would love to see FOSS projects adopt the idea that it’s not enough to cite which code is LLM-generated; citations should include the tool used, the model, and the prompt as well.

      Unfortunately, this move by Lutris forces me to assume all code in Lutris is vibe-coded from this point forward, and that Lutris itself is no longer open source software.

    • mindbleach@sh.itjust.works · +9/-5 · 1 day ago

      They know people spit slop slop slop slop like a thirsty dog. Every public defense is protesting too much; every quiet effort is consciousness of guilt. The nature of bad faith is that there is no right answer.

      We each need private vigilance against participating in public harassment campaigns. Is there any reason these people’s behavior changed, or that they were keeping things quiet, besides the fear of dealing with you?

  • mrmaplebar@fedia.io · +71/-2 · 1 day ago

    > There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways it couldn’t have been implemented in a worse way, but it was not AI that bought up all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
    >
    > I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US Army.

    He might have had a leg to stand on here if this was an AI that he had trained himself on ethically-sourced data, but personally I don’t want to be lectured by anyone about “our current capitalist culture” who is intentionally playing right into it by financially supporting the companies at the center of the AI bubble. The very corporations that are known to have scraped countless terabytes of unlicensed data for their own for-profit exploitation, by the way.

    If you discard your self-proclaimed values the second that it becomes convenient or “valuable”, you never had any values to begin with.

    Practice what you preach, or don’t preach at all.

      • mrmaplebar@fedia.io · +4/-1 · 16 hours ago

        Why? You really don’t see any difference between training an AI model off of public domain, creative commons and licensed data, and corporations like Meta and Anthropic pirating millions of books without even so much as consent from the original authors?

        I wouldn’t have a problem with AI if it was trained legitimately, but sadly working people are being ripped off by massive corporations on an unprecedented scale.

        • BananaIsABerry@lemmy.zip · +1 · 14 hours ago

          I think that, considering the goal of ensuring the LLM doesn’t directly reproduce the training data, it really doesn’t matter. I don’t think trillions of characters arranged into words so something can spit out the most likely combination of those words back at me really has anything to do with how those words are sourced.

          I also have no issue with piracy and think IP laws are currently way too strongly in favor of IP holders. Maybe my moral compass is off or something, idk.

          • mrmaplebar@fedia.io · +3 · 12 hours ago

            > I think that, considering the goal of ensuring the LLM doesn’t directly reproduce the training data, it really doesn’t matter. I don’t think trillions of characters arranged into words so something can spit out the most likely combination of those words back at me really has anything to do with how those words are sourced.

            To me it’s a question of “fair use”: is it fair for the richest for-profit tech corporations on Earth to scrape every book, painting and song from the internet, without so much as basic consent or compensation, for the benefit of their shareholders, or not?

            You have to concede that all of these companies, be it OpenAI, Meta, Anthropic, Google, etc., wouldn’t have an LLM product at all without a massive quantity of high-quality training data. Even OpenAI themselves have admitted this fact in court, claiming that it would be impossible for them to achieve the desired result without infringing on other people’s works.

            Are you the type of person who believes that “profit is exploitation” by any chance? Marxism is popular on here, right?

            So let’s forget about copyright and start talking about “exploitation”…

            > By far the most influential theory of exploitation ever set forth is that of Karl Marx, who held that workers in a capitalist society are exploited insofar as they are forced to sell their labor power to capitalists for less than the full value of the commodities they produce with their labor.

            Source: https://plato.stanford.edu/entries/exploitation/#MarxTheoExpl

            They have no product without our labor.

            There is no “OpenAI Studio Ghibli filter” for Altman to profit off of, without the artwork of Hayao Miyazaki, Kazuo Oga, and multitudes of other lower-level workers who are certainly not as well off as the tech billionaires.

            What the “AI” industry all comes down to is an unprecedented exploitation of other people’s intellectual labor for profit. It’s not some great talent equalizer as some delusional people seem to think it is. It is a vehicle by which the richest members of the corporate ownership class are taking the work of the creative class, and have now created an investment bubble in which just about all of the money flows up to the top.

            Over the last few years of this bubble, have we seen any real benefit to society or humanity? No.

            Oligarchs like Altman, Zuckerberg and Musk are the only people reaping the financial benefits of everyone else’s work.

            Your moral compass is probably fine. But like the person above who compared LLMs to pirating Photoshop, I think you’re just not seeing the forest for the trees. We can agree to disagree, but I’m not happy about what is effectively modern-day robber barons.

      • mrmaplebar@fedia.io · +3/-1 · 18 hours ago

        As is FOSS licensed software… Copyright and license notices at the top of every source file.

        So, why should anyone respect the GPL or even the MIT license when they can simply ignore it and exploit the work of the open source community?

          • mrmaplebar@fedia.io · +2 · 16 hours ago

            Well, then I guess you’re not such a fan of “open source” as the developer of Lutris is, because he has chosen to maintain the copyright of his work and license his code under the GPLv3.

            As a believer in FOSS myself, I think it’s hypocritical that he expects people to respect the license attached to his code when he is choosing not to respect the licenses of others.

    • piccolo@sh.itjust.works · +6/-24 · 1 day ago

      You can run your own AI models locally, even if they were trained by the evil corporations. Do you also feel the same way about artists who pirated Photoshop? Does that devalue their work?

      • mrmaplebar@fedia.io · +20/-3 · 1 day ago

        If this is the best argument the pro-AI crowd has left at this point then you’ve lost all ability to reason…

        Pirating Photoshop is, at worst, taking advantage of Adobe, a multi-billion dollar corporation. They are still very profitable and their employees still got paid to do the work. We can debate the ethics of software piracy all day, and I would argue you’re better off investing your mental energy in FOSS, but in the end I think the social impact of people pirating Photoshop is quite small.

        Compare this to generative AI which is built on the unprecedented exploitation of all human arts, culture and intellectual labor without any form of consent or compensation. All for the benefit of the richest tech oligarchs who are more than happy to sell you a subscription to a product that they stole from the creative class.

        Who is benefitting the most from the AI bubble, the starving artist or the wealthy investor? The thoughtful engineer or the slop slinger? The workers or the suits?

        No matter which way you slice it, you’re not “sticking it to the man”, you are the man. He shouldn’t embarrass himself by blaming “capitalism” when he has shown that he is just as willing to exploit other people’s labor as the next guy; he’s just stupid enough to do it for pennies on the AI billionaires’ dollar.

        • piccolo@sh.itjust.works · +2/-4 · 22 hours ago

          My point was that AI is a tool. You can either use it or you don’t. You speak of it being “exploitative”, but the world would be much better if copyright didn’t exist and intellectual material was simply made available to everyone.

          • mrmaplebar@fedia.io · +1/-1 · 18 hours ago

            Do you create, or just consume?

            A world without copyright would be significantly worse for the people who make things, like writers, artists, musicians, etc.

            In the real world, with the current laws, nobody should be entitled to exploit other people’s physical or intellectual labor. If profit is exploitation, then why wouldn’t AI be?

            • piccolo@sh.itjust.works · +1/-2 · 14 hours ago

              So you support FOSS? Does that mean you believe source code should be GPL or some other similar license?

              • mrmaplebar@fedia.io · +3/-1 · 13 hours ago

                I do support FOSS, in fact I have written FOSS code as part of my job in the non-profit space for almost a decade. I’m thankful for all of the people who write code whether it’s copyleft GPL or permissive MIT. But I still recognize that it’s their code and that they are simply granting me a license to use it under certain conditions.

                Generative AI takes those conditions and wipes its ass with them. I have a problem with that.

  • ubergeek77@lemmy.ubergeek77.chat · +11/-1 · 1 day ago

    Interesting that the maintainer of Proton-GE personally closed the first issue. Keep an eye on Proton-GE’s quality over the next few months.

  • Rhaedas@fedia.io · +19/-1 · 1 day ago

    A number of weeks ago I noticed that one app I use through Lutris (for the various settings it needs) had stopped loading. So I did a lot of looking around to figure out what was going on (I didn’t know it was Lutris; I searched for mentions of issues with the app, with Wine, etc.).

    Finally I ran across a bug report on the Lutris GitHub that sounded like my problem. Part of the problem was how slowly some updates filter out, so I ended up doing an uninstall from the manager and manually forcing an update. All is good now.

    I wonder if the bug was AI driven (don’t even recall what it was, it was a small update that broke things for some people).

    Great to know I should probably expect more fires later. I probably need to see if I can make this app run on my own in Wine. A shame, it’s working fine as is. But I need to be ready.

  • entwine@programming.dev · +14/-3 · 1 day ago

    Well, guess I’m uninstalling Lutris now. I’ll have to manage my library the old fashioned way until a slop-free alternative comes along.

    • Matty_r@programming.dev · +5 · 1 day ago

      I was thinking about it. It’s a massive project that does a lot of stuff I’ve never touched; personally, all I use it for is managing my Wine prefixes. Obviously there is a lot of extra stuff that goes into it, but I could probably write my own app that does that much in a couple of weeks.

      I guess this is all to say: it’s a huge project, and for me personally it has a lot of what you might call bloat. Maybe something that pares all that extra stuff away into optional plugins would be a better approach for a next-generation all-in-one launcher.

      • anyhow2503@lemmy.world · +3 · 1 day ago

        You might want to consider Bottles as an alternative for managing Wine prefixes and launching applications.

        • Matty_r@programming.dev · +1 · 15 hours ago

          Thanks for reminding me about it. I’ve tried it once before and didn’t like it, but I’ll give it another chance for sure.

  • ulterno@programming.dev · +13/-3 · 1 day ago

    welp, another project off my list.
    It was handy in that it enabled me to avoid opening EGS, but I haven’t been using EGS lately anyway.
    It’s easier to just stop using it rather than have to write a Firejail profile for it.
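
    (For anyone who would rather sandbox it than drop it: a Firejail profile is just a text file of allow/deny directives. A minimal sketch; the path and directive set below are assumptions for illustration, not a vetted Lutris profile:)

    ```
    # ~/.config/firejail/lutris.profile -- illustrative only
    include globals.local
    # limit filesystem access to the game library
    whitelist ${HOME}/Games
    private-tmp
    caps.drop all
    seccomp
    ```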

      • ulterno@programming.dev · +2/-1 · 1 day ago

        I am considering it.
        I will think of it the next time I actually feel the need to use another launcher.

        For now, I am mostly playing GoG games which start directly from a .desktop file and for the few occasions I use Steam, it is fine to just use Steam.
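
            (For reference, such a .desktop entry is just a small INI-style file following the freedesktop.org Desktop Entry spec; the paths and names below are made-up examples, roughly matching what GOG offline installers set up:)

            ```ini
            [Desktop Entry]
            Type=Application
            Name=Example GOG Game
            # GOG offline installers ship a start.sh wrapper script
            Exec=/home/user/Games/example-game/start.sh
            Path=/home/user/Games/example-game
            Categories=Game;
            ```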

          • ulterno@programming.dev · +7/-1 · 1 day ago

            I don’t need GoG automation.
            I don’t need an EGS automation either. If they didn’t require a launcher I might actually consider buying from them.


            I have a launcher and that is the Operating System

  • mindbleach@sh.itjust.works · +11/-21 · 1 day ago

    > entire product loudly denigrated because of new tool used

    Yeah can’t imagine why they’d remove the ‘come have an argument at me’ label.

    I want the bubble to burst so this moral panic will end. Programs can code, now. That’s not going away. Make your peace. We can either leverage this new ability to describe code into existence, and improve all the ways where it demonstrably works okay - or we can pretend that wasn’t the goal of compilers and high-level languages the whole time.

    Oh but this new thing is different; yeah it’s always different, that’s what new means. Neural networks sounded great for decades but had a hard time existing. We finally accepted the bitter lesson that power scales better than cleverness - and hey presto, ‘what’s the next symbol?’ is as smart as a junior developer.

    If you think these fumbling efforts are the best this tool will ever be, we can still extract useful work from it. It’s already a punchline in videos that build some crazy thing the hard way, then have an LLM effortlessly switch languages for speed. Or fight integration hell on their behalf. We’re not doing anyone favors by pretending the problem is the tech. Or by harassing people who work for free on things you like.

    • southsamurai@sh.itjust.works · +5/-4 · 1 day ago

      You know, as much as I dislike the way LLMs and other models have been made and used by capitalists, I agree with you that the moral panic around it has turned into a form of slop itself.

      It isn’t like people haven’t been dreaming of what the technology could be for decades. And it isn’t like it wasn’t inevitable that something would be created like the various generative models. The only part that’s bad is the execution. Which is extremely fucking bad, and it’s disgusting that it is happening. But that’s not the same thing as the underlying concept and technology being bad.

      • mindbleach@sh.itjust.works · +7/-5 · 1 day ago

        It’s a whole new kind of software.

        A pile of examples can become a working program. Neural networks are universal approximators, and anyone with a video card can now make them. The work they do feels like hard science fiction written by comedians.

        For some reason we’ve only seen two models taken seriously: spicy autocomplete and a denoiser. One is a chatbot that’s just smart enough to get in trouble. The other is CGI for dummies that could make movies as cheap as pen and paper.

        The problem in full is the world’s most obvious bubble forcing these technologies on people. On everybody. The folks who choose this, for themselves, don’t need worrying about. Where it doesn’t work out they’ll pretend it never happened. Where it works, neat. Again: the problem is the force and the scale.

        So yes, an artificial tornado beside your house is intolerable, but it’s obviously not a fundamental problem with the technology. Even an identical quantity of GPUs could simply be spread out, so many buildings merely hum.

        And vegan local models will arise, made from only bespoke licensed data, trained by distributed amateurs. But the big boys shove fancier models into your hands so often that it’d be archaic before it begins… and most people loudly complaining would just keep complaining.

        The identitarian performance has to stop. Even folks mumbling ‘it’s awful, you should never,’ usually end with ‘but anyway here’s how I use it.’ The tech is fine. It doesn’t belong in your browser. It doesn’t belong on your keyboard. It doesn’t belong in your goddamn e-mail, before you’ve even read it. But curmudgeons and iconoclasts alike have found utility in this Yes Man improv partner who kinda knows C++. And animators will get real quiet when some product magically in-betweens their drawings.

        Sam Altman is a fraud. Facebook can burn. CUDA must become open-source after Nvidia craters. But five years from now, this wave of AI will still be so commonplace that it’s boring. We will take for granted that computers perform dubious witchcraft.

    • Crackhappy@lemmy.world · +48/-3 · 1 day ago

      Here’s the thing. The more you use AI to generate your code, the less likely you are to fully review all of it, understand what it’s doing and be able to fix it when bugs or exploits appear, or even know that they exist. So sure, it might work for now but what about in a couple of years of vibe coding it?

      • piccolo@sh.itjust.works · +7/-4 · 1 day ago

          Isn’t that just an issue of code review? If you’re accepting subpar code from AI… you’re probably accepting subpar code from humans…

        • rumschlumpel@feddit.org · +17/-1 · 1 day ago

          Would have been easier if the original dev(s) continued to work on it themselves, instead of sloppifying the code.

        • 18107@aussie.zone · +16 · 1 day ago

          Technical debt is a very real thing that has been around for a long time and is well documented.

          AI code is not old enough for the technical debt to have really hit hard yet.

        • forrgott@lemmy.zip · +9/-2 · 1 day ago

          Except, of course, that this is not an imagined or even unlikely outcome; so, no, by definition your link does not apply.

          Maybe read what you link???

          • mindbleach@sh.itjust.works · +3/-2 · 1 day ago

            ‘It works fine now, but what about after years of this very recent development?’ is absolutely imagined.

            You wanna argue for it? Argue. Don’t posture.

            • forrgott@lemmy.zip · +2 · 22 hours ago

              It doesn’t work fine now, though; that’s the whole point. Vibe coding has resulted in numerous public failures, many of them costly. All avoidable.

              The tech debt involved is not some imaginary thing. A bullshit generator on steroids giving you magic coding power is, however, completely imaginary.

              And telling someone else “don’t posture”?!? Then explain YOUR argument! Are you for real?!?

              (╯°□°)╯︵ ┻━┻

  • ikt@aussie.zone · +13/-54 · 1 day ago

    this is some real 2022 style complaint

    most developers are using ai in 2026 in some way, it’s simply too good

    • mushroommunk@lemmy.today · +26 · 1 day ago

      > it’s simply too good

      Tell that to the code reviews I’ve been rejecting. Strong disagree. People are using it because they swallowed the snake oil; that doesn’t mean we can’t keep fighting against it.

      • entwine@programming.dev · +7 · 1 day ago

        > People are using it because they swallowed the snake oil

        And/or have developed AI psychosis after one too many erotic role play sessions.

        “I get it through my work” yeah, whatever you say Mathieu.

    • mrmaplebar@fedia.io · +20 · 1 day ago

      I have multiple years of experience maintaining and reviewing code for a medium-sized open source project, and in my experience we have not seen any meaningful increase in good contributions since the AI investment bubble kicked off a couple of years ago.

      On the flip side, I know that dealing with a glut of low-quality AI-generated slop merge requests has been a real problem for other large open source projects. https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/

      In my personal view, AI is really not suitable for actual programming, just typing. Programming requires thought and logic, something LLMs do not actually possess and are not capable of. Furthermore, without an authentic understanding of the code being generated, the human beings who are ultimately responsible for maintaining the code, fixing errors and making improvements will only be hurting themselves in the long run when they can’t follow the “logic” of what was written. You’re just creating more problems for yourself in the future.

      Personification of probability doesn’t do us any good; open source projects require thoughtful contributions from thinking entities.

      To make matters worse, I think that AI is also not at all suitable for “open source” development, as it obfuscates authorship and completely obliterates the concept of FOSS licensing.

      Were AI models trained on FOSS code including GPL-licensed code? Does this make the output of AI models GPL too, or are LLMs magical machines that can launder GPL code into something proprietary? How do you know that the code produced by your LLM is legally safe and not ripped verbatim from someone else’s scraped proprietary codebase? Finally, who is the author and copyright holder of AI generated code?

      Ultimately, right now in 2026 we are seeing a lot of use of generative AI being forced by the corporate world, but we are not seeing that result in any meaningful improvement to worker productivity or product quality. (Windows 11 has never been in worse shape than it is today, and I can only assume that is because it is being programmed with much less human intelligence behind it.)

    • magnetosphere@fedia.io · +6 · 1 day ago

      Are developers really using AI because it’s “too good”, or is it because management has made the use of AI mandatory?

      • Kissaki@programming.dev · +2 · 1 day ago

        Some, maybe many, developers use and want to use AI even without management pushing it.

        I’m skeptical and see limited usefulness, but I’ve also heard seemingly different sentiments from colleagues.

    • pixxelkick@lemmy.world · +14/-21 · 1 day ago

      People are malding, but it’s the truth.

      You are living under a rock if you think any major software now doesn’t have AI-written pieces in it in some manner.

      It’s so common now that it’s a waste of time to label it; you should just assume AI was involved at this point.

      • commander@lemmy.world · +5/-15 · edited · 1 day ago

        Where I work, the company has a ChatGPT contract that’s used as a coding assistant tool in VS Code, and I imagine also by the admin/contract/legal people doing what they do. Every contracting company developer I’ve worked with has had some enterprise ChatGPT/Claude/Gemini/etc. at their company. I’ve talked to software developers at large companies who raved about what they could do with enterprise Claude and enterprise Cisco AI coding tools.

        Pretty much everyone I know at the minimum uses the Gemini Google search summary for coding questions/Dockerfile/Kubernetes/OpenShift/Docker Compose/Helm/Terraform/Ansible/bash scripts/Python scripts/snippets/…

        edit: The ineffective activist hive mind here doesn’t like hearing about people using AI. The first person I knew who made regular use of ChatGPT, before I had ever opened the webpage, was an electrician, about two years ago. He used it to write up his emails to customers. The second was a person who did marketing for a local restaurant chain; they used ChatGPT to draft marketing text for emails and mailers, and have been doing that for about two years as well.

        I remember news of Level-5 using generative AI to create early ideas. Beloved video games Expedition 33 and Arc Raiders use/used generative AI.

        2024 article

        https://www.tomshardware.com/tech-industry/artificial-intelligence/over-1000-games-using-generative-ai-content-are-already-available-on-steam-but-are-any-of-them-worth-playing

        2025 article

        https://www.tomshardware.com/video-games/pc-gaming/1-in-5-steam-games-released-in-2025-use-generative-ai-up-nearly-700-percent-year-on-year-7-818-titles-disclose-genai-asset-usage-7-percent-of-entire-steam-library

        If you’re fighting against AI usage in development of anything, strategies of the last few years need to be revisited to determine where improvements can be had

        • anyhow2503@lemmy.world · +2 · 1 day ago

          I’m glad I don’t work where you do. Unfortunately though, I’ve heard plenty of stories from industry colleagues who have been forced to use AI coding assistants, regardless of any actual impact on productivity or reliability. The consequences of this approach are already manifesting in the form of an embarrassing new era of security vulnerabilities. Preaching about the widespread usage of AI is misleading at best if the adoption is mostly based on external marketing pressure. We’ve had this before with other technologies that get pushed hard by sales people.

        • pixxelkick@lemmy.world · +6/-1 · 1 day ago

          At absolute worst, bare minimum, these tools function as incredibly fast, fuzzy, intent-based search over documentation.

          Instead of spending 10 minutes on “where the hell is the documentation I’m trying to find”, these tools can hunt it down for me in a matter of seconds.

          That already makes them useful just for that, let alone all the other crazy shit they help with now.

        • Lodespawn@aussie.zone · +4 · 1 day ago

          It seems like the only people who actually derive value out of it are software developers or middle managers. Every other professional discipline has liability and a need to verify accuracy before actioning something. So beyond reading the AI generated summary on a search engine for non critical things it’s basically useless.

        • Kissaki@programming.dev · +1/-1 · 24 hours ago

          I appreciate your comment, even when many others downvote it. Honest experiences like this provide context and should always be upvoted in my eyes.

          You didn’t even make any claims about effectiveness or usefulness. Downvotes like these make me sad and make me feel like this is an unwelcoming community in general, where you can’t have open and honest discussions.