• 0 Posts
  • 35 Comments
Joined 3 years ago
Cake day: June 12th, 2023

  • You're right that it needs to be rebooted, and that's sort of what I'm getting at. The technical side (the idea of how the models work) is fine. The problem is that big companies fed it stolen content, and instead of optimising its performance they decided to burn even more of our environment.

    I wholeheartedly agree that this is evil.

    But imagine an entirely "untrained" LLM: one fed with genuinely open-source, willingly contributed information, which then runs locally. I think that's OK.

    Double-checking the output: that's absolutely correct. I work in software development and have dabbled with AI code tools. I would never let them touch my projects directly. Using one as a sounding board was slightly helpful, but knowing the current cost and impact of these models, it's not worth it.

    Use gallons of water and a dozen trees to feed the plagiarism machine so it can do my job for me? Never ever. Fuck that.


  • Just edited my reply for context. Hopefully it explains my personal view, at least.

    LLMs provided by billionaire tech bros? Burn that shit to the ground.

    I don't see a problem with the scientific idea and application of AI where it's helpful and relevant.

    The difference being that no vibe-coded, AI-generated bullshit ends up in the kernel. The use of this technology elsewhere can be completely fine, if it's treated correctly.

    But I'm totally on your side that "OpenAI LLM vibe-coded slop should never ever land in the kernel". And I trust Linus on that: given his history with regular, 100%-human maintainers, he wouldn't let that garbage slide.

    "AI" as a term has become synonymous with OpenAI, Anthropic, and Gemini. Those are just LLM products sold by companies, and they should never be near real critical production systems. But the wider scientific/technological side could be applied ethically, without using those LLM products.


  • I'm not trying to argue one way or the other, or take anyone's side here.

    I'm just putting it into context: "AI-generated code in the Linux kernel" isn't what's currently happening.

    Unless you have evidence otherwise.

    Edit: KDE's response maybe clarifies what "AI" means in this discussion:

    "We agree and we agree with many of your objections. AI has become a synonym of tech irresponsibility, greed and exploitation, like crypto was before it. The difference is AI existed before the current craze and pursued legitimate goals. That is still happening in some areas of AI research and ignoring all uses of AI would be throwing the baby out with the bath water."

    LLM providers like OpenAI are scum. But the general technology around "AI" isn't as bad as that.