Not sure if this is the best community to post in; please let me know if there’s a more appropriate one. AFAIK [email protected] is meant for news and articles only.
I hate AI because it’s a waste of finite resources.
I hate it because it’s supported by a system of corruption and greed that is destroying the economy.
I hate it because all major AI vendors have supported or abetted criminals in circumventing democracy worldwide.
I hate it because it isn’t AI, it’s an LLM.
I hate “A.I.” because it’s not A.I. it’s an if statement stapled to a dictionary.
Also because I can’t write the short name of Albert without people thinking I’m talking about A.I.
There is no “AI”.
That deception is the main ingredient in the snake oil.
Does it run on something that’s modelled on a neural net? Then it’s AI by definition.
I think you’re confusing AI with “AGI”.
Whose definition?
People who know what they are talking about. But that doesn’t matter to you, does it?
Whom? Name these people.
You are an idiot. I am not playing your game; I’ll just call you out on being an idiot. If you came across as genuine I would give you a history lesson, but you are just an asshole looking to pick a fight. If you could articulate how exactly knowing of John McCarthy and countless others and their contributions would change anything about what you are doing, I would be happy to google that for you.
Where did anyone from the Dartmouth folks identify “AI” as “anything that runs on a ‘neural network’”?
Edit: Also, I asked two very simple questions. Your response already tells me everything I need to know.
Edit II: What fucking “game” was I playing by simply asking you to verify your claims?
Lol dude like I said, knowing who wouldn’t change what you are doing.
Por que no los dos?
Because AI - in a very broad sense - is useful.
Machine Learning and the training and use of targeted, specialized inferential models is useful. LLMs and generative content models are not.
Most arguments people make against AI are in my opinion actually arguments against capitalism. Honestly, I agree with all of them, too. Ecological impact? A result of the extractive logic of capitalism. Stagnant wages, unemployment, and economic despair for regular working people? Gains from AI being extracted by the wealthy elite. The fear shouldn’t be of the technology itself, but of the system that puts profit at all costs over people.
Data theft? Data should be a public good where authors are guaranteed a dignified life (decoupled from the sale of their labor).
Enshittification, AI overview being shoved down all our throats? Tactics used to maximize profits tricking us into believing AI products are useful.
I’ve seen it said somewhere that, with the advent of AI, society has to embrace UBI or perish, and while that’s an exaggeration it does basically get the point across.
AI is just a tool like anything else. What’s the saying again? “AI doesn’t kill people, capitalism kills people”?
I do AI research for climate and other things and it’s absolutely widely used for so many amazing things that objectively improve the world. It’s the gross profit-above-all incentives that have ruined “AI” (in quotes because the general public sees AI as chatbots and funny pictures, when it’s so much more).
The quotes are because “AI” doesn’t exist. There are many programs and algorithms being used in a variety of ways. But none of them are “intelligent”.
There is literally no intelligence in a climate model. It’s just data + statistics + compute. Please stop participating in the pseudo-scientific grift.
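For what it’s worth, the “data + statistics + compute” core of a trend estimate really does fit in a few lines. This is a toy sketch with made-up numbers, not a real climate model — just an ordinary least-squares fit:

```python
# Ordinary least-squares fit of a linear trend: pure data + statistics.
# The temperature anomalies below are invented toy data, not real measurements.

years = [2000, 2005, 2010, 2015, 2020]
anomaly = [0.40, 0.55, 0.62, 0.78, 0.93]  # degrees C, fabricated for illustration

n = len(years)
mean_x = sum(years) / n
mean_y = sum(anomaly) / n

# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomaly)) \
      / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

print(f"warming trend: {slope:.4f} C/year")
print(f"extrapolated 2030 anomaly: {intercept + slope * 2030:.2f} C")
```

No learning, no “intelligence” — just arithmetic over a dataset, which is the commenter’s point.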
And this is where you show your ignorance. You’re using the colloquial definition of intelligence and applying it incorrectly.
By definition, a worm has intelligence. The academic, or biological, definition of intelligence is the ability to make decisions based on a set of available information. It doesn’t mean that something is “smart”, which is how you’re using it.
“Artificial Intelligence” is a specific definition we typically apply to an algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals. We take large amounts of data to train it and it “learns” and “remembers” those specific things. Then when we ask it to process new data it can make an “intelligent” decision on what comes next. That’s how you use the word correctly.
Your ignorance didn’t make you right.
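As a toy illustration of that train-then-decide loop: a single perceptron-style neuron, trained on invented AND-gate data. This is a deliberately minimal sketch, nothing resembling a modern network:

```python
# A single artificial neuron: a weighted sum of inputs pushed through a
# threshold. Trained on examples, it then "decides" an output for inputs --
# the loose, non-colloquial sense of "intelligent" described above.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)  # perceptron update rule
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy training data: the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

It “learns” weights from data and then makes a decision on any input — and it is obviously not “smart”, which is the distinction being drawn.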
algorithm that’s been modelled after the real world structure and behaviour of neurons and how they process signals
Except the neural net model doesn’t actually reproduce everything real, living neurons do. Researchers back in the 1940s said, “hey, what if this is how brains work?” They weren’t reproducing real biology, they just put forward a model. It’s a useful model. But it’s also an extreme misrepresentation to say it approximates actual neurons.
lol ok buddy you definitely know more than me
FWIW I think you’re conflating AGI with AI, maybe learn up a little
The term AGI had to be coined because the things they called AI weren’t actually AI. Artificial Intelligence originates from science fiction. It has no strict definition in computer science!
Maybe you learn up a little. Go read Isaac Asimov
Are you talking about AI, or an LLM branded as AI?
Actual AI is accurate and efficient because it is designed for specific tasks. Unlike an LLM, which is just fancy autocomplete.
Actual AI doesn’t exist
FTFY.
LLMs are part of AI, so I think you’re maybe confused. You can say anything is just fancy anything, that doesn’t really hold any weight. You are familiar with autocomplete, so you try to contextualize LLMs in your narrow understanding of this tech. That’s fine, but you should actually read up because the whole field is really neat.
Literally, LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage. Same fundamental mathematics under the hood, but given a humongous scope.
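A deliberately tiny sketch of that shared objective — count what follows what, then predict the next word. Nothing like a real LLM’s training, but the same “predict what comes next” idea, with a made-up corpus:

```python
# Next-word prediction from counted bigram statistics -- the basic objective
# that phone autocomplete and LLM pre-training share, at a comically small
# scale. The corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Suggest the most frequent continuation seen after `word`."""
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # → "cat" (seen twice, vs "mat"/"fish" once)
```

An LLM replaces the lookup table with a neural network and the bigram window with a long context, but the training target — likely next token — is the same shape.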
LLMs are extensions of the techniques developed for autocomplete in phones. There’s a direct lineage
That’s not true.
How is this untrue? Generative pre-training is literally training the model to predict what might come next in a given text.
That’s not what an LLM is. That’s part of how it works, but it’s not the whole process.
Even LLMs are useful for coding, if you keep them in their autocomplete lane instead of expecting them to think for you.
Just don’t pay a capitalist for it; a tiny, power-efficient model that runs on your own PC is more than enough.
Yes technology can be useful but that doesn’t make it “intelligent.”
Seriously why are people still promoting auto-complete as “AI” at this point in time? It’s laughable.